The Ultimate Guide to Making HTTP Requests in Node.js with Fetch

As a web developer, making HTTP requests is a critical part of interacting with APIs, scraping web pages, and fetching data from various sources. In the world of Node.js, there are several ways to send HTTP requests, but in this guide we'll dive deep into the Fetch API – a modern, promise-based solution.

We'll cover everything you need to know to start making effective HTTP requests with Fetch in your Node.js projects. From basic GET and POST requests to handling errors, using proxies, and even some web scraping – this guide has you covered.

By the end of this article, you'll be equipped with the knowledge and code samples to fetch data from any source on the web with speed and reliability. Let's jump in!

Why Use Fetch for HTTP Requests?

Fetch is a relatively new API that provides a cleaner, more modern way of making asynchronous HTTP requests compared to legacy solutions like the built-in http module or the now-deprecated Request library.

Some of the key benefits of Fetch include:

  • Native to web standards and supported in all modern browsers
  • Promise-based API that avoids callback hell and works seamlessly with async/await
  • Simpler syntax compared to older libraries
  • Provides a single interface for working with HTTP requests and responses

As of Node.js v18, Fetch is now supported natively without any additional dependencies. This makes it an attractive choice for developers who want an easy, standardized way of interacting with HTTP.

Fetch vs Axios Comparison

When it comes to making HTTP requests in Node.js, Fetch's main rival is the ever-popular Axios library. While both are promise-based and provide a high-level API for working with requests and responses, there are some key differences:

Feature                         Fetch       Axios
Browser support                 ✅          ✅
Node.js support                 ✅ (v18+)   ✅
Promises                        ✅          ✅
Request/response interceptors   ❌          ✅
Request cancelation             ✅          ✅
Built-in XSRF protection        ❌          ✅
Automatic JSON parsing          ❌          ✅
Download progress               ❌          ✅

As you can see, Axios has a few extra features like interceptors, XSRF protection, and progress tracking that Fetch lacks. However, many of these features can be implemented with Fetch as well with a bit of extra code.
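
For example, here's a minimal sketch of emulating request/response interceptors with a plain wrapper function (fetchWithLogging is a hypothetical helper name for this illustration, not part of any library):

async function fetchWithLogging(url, options = {}) {
  // "Request interceptor": runs before every request goes out
  console.log(`Sending ${options.method || 'GET'} request to ${url}`);
  const response = await fetch(url, options);
  // "Response interceptor": runs after every response comes back
  console.log(`Received ${response.status} from ${url}`);
  return response;
}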

Axios has been around a lot longer than Fetch and is still the most widely used HTTP client in Node.js. According to the 2022 State of JS survey, Axios was used by 78% of respondents compared to just 13% for Fetch.

However, with the addition of Fetch to Node.js core and its availability in all modern browsers, it's quickly gaining adoption. In my opinion, Fetch is a great choice for most use cases, especially if you're looking for a lightweight, standardized solution without extra dependencies.

Making GET Requests with Fetch

Now that we've covered the "why" of Fetch, let's jump into the "how". We'll start with the most common type of HTTP request – the GET request.

Here's a basic example of making a GET request to the JSON Placeholder API to fetch some sample data:

fetch('https://jsonplaceholder.typicode.com/posts')
  .then(response => response.json())
  .then(data => console.log(data));

Let's break this down step-by-step:

  1. We call the global fetch function and pass it the URL we want to send the request to. This initiates the request and returns a promise that will resolve to the response.

  2. Once the response is received, the first .then() block is called with the Response object. We call the .json() method on the response to parse the JSON data in the response body into a JavaScript object.

  3. The .json() method also returns a promise, so we chain another .then() block to handle the parsed data. Here we simply log the data to the console, but in a real app you would process and use the data in some way.
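
Because fetch() returns a promise at each step, the same request also reads naturally with async/await:

async function getPosts() {
  const response = await fetch('https://jsonplaceholder.typicode.com/posts');
  const data = await response.json();
  console.log(data);
}

getPosts();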

One important thing to note about the Response object is that it's a streaming interface. This means that even if the full response body hasn't been received yet, you can still access parts of it as they come in.

For example, you could use the .body property to get a readable stream of the response data and process it chunk-by-chunk. This is useful for handling large responses or data streams.
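
Here's a minimal sketch of processing a response chunk-by-chunk. It assumes Node's built-in fetch (v18+), where response.body is a web ReadableStream (node-fetch exposes a Node.js stream instead):

async function streamPosts() {
  const response = await fetch('https://jsonplaceholder.typicode.com/posts');
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let text = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // Each chunk arrives as a Uint8Array as soon as it is received
    console.log(`Received a chunk of ${value.length} bytes`);
    text += decoder.decode(value, { stream: true });
  }

  console.log(`Done - ${text.length} characters total`);
}

streamPosts();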

Handling Errors and Status Codes

When making HTTP requests, it's important to handle errors and non-2xx status codes that may be returned by the server. With Fetch, you can check the ok property of the response to determine if the request was successful:

fetch('https://jsonplaceholder.typicode.com/invalid-url')
  .then(response => {
    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }
    return response.json();
  })
  .then(data => console.log(data))
  .catch(error => console.log(error.message));

Here, if the response status code is not in the 2xx range, we throw an error with a message that includes the status code. This error is then caught in the .catch() block and logged to the console.

By throwing an error in the first .then() block, we prevent the second .then() from being called if the request was unsuccessful. This is a common pattern for handling errors with Fetch.

It's also worth noting that fetch() only rejects its promise for network errors and invalid URLs – an HTTP error status like 404 still resolves successfully, which is why the manual ok check above is needed. When a network error does occur, the promise rejects immediately, skipping any .then() blocks and going straight to the .catch().

To handle different types of errors and status codes, you can use a switch statement or if/else blocks to check the response.status and act accordingly.
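
For example, here's a sketch that branches on a few common status codes (the exact cases you handle will depend on the API you're calling):

fetch('https://jsonplaceholder.typicode.com/posts/1')
  .then(response => {
    switch (response.status) {
      case 200:
        return response.json();
      case 404:
        throw new Error('Resource not found');
      case 429:
        throw new Error('Rate limited - try again later');
      default:
        throw new Error(`Unexpected status: ${response.status}`);
    }
  })
  .then(data => console.log(data))
  .catch(error => console.log(error.message));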

Making POST Requests with Fetch

In addition to retrieving data with GET requests, Fetch also supports sending data to APIs with POST requests. This is commonly used for creating or updating resources.

Here's an example of using Fetch to create a new blog post via a REST API:

const post = {
  title: 'My New Post',
  body: 'This is the body of my new post',
  userId: 1
};

fetch('https://jsonplaceholder.typicode.com/posts', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(post)
})
  .then(response => response.json())
  .then(data => console.log(data));

In this example, we're sending a POST request to create a new blog post. Here's what's happening:

  1. We create an object called post that represents the data we want to send in the request body. This includes the title, body content, and ID of the user creating the post.

  2. We call fetch() with the URL of the API endpoint for creating new posts. As a second argument, we pass an options object that specifies the HTTP method (POST), headers, and request body.

  3. In the headers, we set the Content-Type to application/json to indicate that we're sending JSON data in the request body.

  4. To send the data in the request body, we need to convert our post object to a JSON string using JSON.stringify().

  5. Finally, we chain two .then() blocks to parse the JSON response and log the newly created post to the console.

This is just a simple example, but it demonstrates the basic pattern for sending data with POST requests using Fetch. You can use a similar approach for other HTTP methods like PUT, PATCH, and DELETE.
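
For example, here's a sketch of updating and then deleting the same resource (JSON Placeholder accepts these requests, but doesn't persist the changes):

// Update an existing post with PUT
fetch('https://jsonplaceholder.typicode.com/posts/1', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ id: 1, title: 'Updated Title', body: 'Updated body', userId: 1 })
})
  .then(response => response.json())
  .then(data => console.log(data));

// Delete a post - no request body needed
fetch('https://jsonplaceholder.typicode.com/posts/1', { method: 'DELETE' })
  .then(response => console.log(`Delete finished with status ${response.status}`));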

Using Proxies with Fetch

When making HTTP requests, there are situations where you may need to use a proxy server. Common reasons include hiding your IP address, bypassing geo-restrictions, or web scraping.

Unfortunately, Fetch doesn't have built-in support for proxies like some other HTTP clients. However, there's a workaround that involves using the node-fetch package together with https-proxy-agent. (Note that the agent option used below is specific to node-fetch; Node's built-in fetch ignores it and needs a proxy-aware dispatcher from undici instead.)

Here's an example of using Fetch with a proxy:

import fetch from 'node-fetch';
// https-proxy-agent v6+ exposes a named export; older versions used a default export
import { HttpsProxyAgent } from 'https-proxy-agent';

// Replace with the URL of your own proxy server
const proxyUrl = 'http://proxy.example.com:8080';
const agent = new HttpsProxyAgent(proxyUrl);

fetch('https://httpbin.org/ip', { agent })
  .then(response => response.text())
  .then(text => console.log(text));

In this code, we first import the node-fetch and https-proxy-agent packages. Then we create an instance of HttpsProxyAgent with the URL of our proxy server.

To use the proxy with Fetch, we pass an options object to fetch() with the agent property set to our proxy agent instance. This tells Fetch to route the request through the specified proxy server.

When you run this code, you'll see that the IP address logged is the IP of the proxy server, not your actual IP address.

One thing to keep in mind when using proxies with Fetch or any other HTTP client is to choose a reliable, high-quality proxy provider. Free proxies often have poor performance and uptime.

Some of the top proxy providers for web scraping and other use cases include:

  • Bright Data – Previously Luminati, one of the largest proxy networks with over 72M+ IPs
  • Oxylabs – Provides dedicated datacenter and residential proxies with global coverage
  • Smartproxy – Offers a mix of datacenter and residential proxies at affordable prices
  • IPRoyal – A newer provider with fast proxies and flexible plans

Using a paid proxy service will give you better IP diversity, uptime, and support compared to free or cheap proxies. It's worth the investment if you're serious about web scraping or need to make a large volume of requests.

Web Scraping with Fetch and Cheerio

One of the most common use cases for making HTTP requests in Node.js is web scraping – extracting data from websites. While Fetch doesn't provide any built-in functionality for parsing HTML, it can easily be combined with a library like Cheerio to scrape data from web pages.

Cheerio is a popular library that allows you to parse and traverse HTML documents using a syntax similar to jQuery. Here's an example of using Fetch and Cheerio together to scrape data from a website:

import fetch from 'node-fetch';
// Recent versions of cheerio don't provide a default export
import * as cheerio from 'cheerio';

fetch('https://example.com')
  .then(response => response.text())
  .then(body => {
    const $ = cheerio.load(body);
    const pageTitle = $('title').text();
    const articleTitles = $('h2.article-title').map((i, el) => $(el).text()).get();

    console.log(`Page Title: ${pageTitle}`);
    console.log('Article Titles:', articleTitles);
  });

In this example, we're scraping the page title and a list of article titles from a hypothetical news website. Here's what's happening:

  1. We use Fetch to send a GET request to the URL of the page we want to scrape.

  2. In the first .then() block, we use the .text() method to extract the raw HTML from the response body.

  3. In the second .then() block, we pass the HTML to Cheerio's load() function to parse it into a traversable document. We store the result in the $ variable.

  4. We use Cheerio's methods to extract the data we want from the page. To get the page title, we select the <title> element and call .text() to get its text content.

  5. To get the article titles, we use a more complex selector to find all the <h2> elements with a class of article-title. We then use Cheerio's .map() method to loop through the matched elements and extract their text. Finally, we call .get() to convert the result to a plain array.

  6. We log the extracted data to the console.

This is just a simple example, but it demonstrates how you can use Fetch to retrieve the raw HTML of a web page and then use Cheerio to parse and extract the data you need.

With more complex websites, you may need to handle pagination, wait for dynamic content to load, or bypass bot detection mechanisms. For these cases, you'll want to look into using a headless browser like Puppeteer or Playwright in combination with Fetch.

Best Practices for Fetch Requests

To wrap up, let's cover some best practices and tips for using Fetch effectively in your Node.js projects:

  1. Always handle errors: Use .catch() blocks or try/catch with async/await to handle network errors, invalid responses, and other issues that may occur.

  2. Set timeouts: Use the AbortController and signal option to set a timeout for your Fetch requests and avoid hanging indefinitely if the server is unresponsive (see the sketch after this list).

  3. Use appropriate headers: Set headers like Content-Type, Authorization, and User-Agent as needed for the API or website you're interacting with.

  4. Don't abuse APIs: Respect rate limits and terms of service for any APIs you use. Avoid sending too many requests too quickly or using unauthorized endpoints.

  5. Cache responses: If you‘re making repeated requests to the same URL, consider caching the response to avoid unnecessary network traffic and improve performance.

  6. Use HTTPS: Always use HTTPS URLs for your Fetch requests to encrypt data in transit and protect user privacy.

  7. Test thoroughly: Make sure to test your Fetch code with different response types, status codes, and error conditions to ensure it handles all cases gracefully.
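
To make the first two practices concrete, here's a sketch of a small helper that combines try/catch error handling with an AbortController-based timeout (fetchWithTimeout is a hypothetical name, and the 5-second default is an arbitrary choice):

async function fetchWithTimeout(url, ms = 5000) {
  const controller = new AbortController();
  // Abort the request if it takes longer than `ms` milliseconds
  const timer = setTimeout(() => controller.abort(), ms);

  try {
    const response = await fetch(url, { signal: controller.signal });
    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }
    return await response.json();
  } catch (error) {
    if (error.name === 'AbortError') {
      throw new Error(`Request timed out after ${ms}ms`);
    }
    throw error;
  } finally {
    clearTimeout(timer);
  }
}

fetchWithTimeout('https://jsonplaceholder.typicode.com/posts')
  .then(data => console.log(data))
  .catch(error => console.error(error.message));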

By following these best practices and the techniques covered in this guide, you'll be well on your way to making effective, reliable HTTP requests with Fetch in your Node.js projects.

Conclusion

In this comprehensive guide, we've covered everything you need to know to start making HTTP requests with the Fetch API in Node.js.

We've explored the benefits of Fetch and how it compares to other popular HTTP clients like Axios. We've walked through detailed examples of how to make GET and POST requests, handle errors and status codes, and use proxies for web scraping and other tasks.

By combining Fetch with libraries like Cheerio, you can scrape websites and extract data easily. And by following best practices like error handling, timeouts, and caching, you can ensure your Fetch code is production-ready.

Whether you're building a REST API client, a web scraper, or any other type of Node.js application that needs to interact with the web, Fetch is a powerful tool to have in your toolkit.

So what are you waiting for? Start fetching! 🚀