This topic has been the focus of several discussions over the past few years, yet here I am to talk about it again. My approach, however, is a little different from that of most articles I have come across: instead of focusing only on the how, I will also cover the why, which is usually assumed to be understood.
A proxy, in general, is a server or a service which can introduce additional layers in our communication to obfuscate or modify content, if configured to do so.
A trivial example of this would be proxying our IP address while sending out requests for YouTube videos that are currently unavailable in our country.
In the context of web development, our primary reason for using a proxy is to avoid CORS (Cross-Origin Resource Sharing) “issues”, which occur because browsers enforce the Same-Origin Policy to protect users from XSS and several other types of attacks.
In simpler terms, this means that browsers, for security reasons, restrict requests that do not come from the same origin as the hosted UI. This prevents attackers from injecting code into our application via ads or plugins to steal our credentials or other sensitive information.
Below is an image from MDN which explains how a CORS-enabled web page performs requests.
But how does the server know whether the requests are coming from the same origin or not? Through request headers. For cross-origin requests, the browser appends a request header called origin to denote which origin the request originated from. The server then has the authority to either allow or reject these origins by providing specific response headers, which are parsed by the browser.
For example, when we load the home page of Google, it makes several requests to different origins. An example of a cross-origin request is shown below:
And in the response, we receive the access-control-allow-* headers which enable the cross-origin communication between these two origins.
The access-control-allow-* headers have various responsibilities: via these headers, the server can define the authentication mechanisms, the acceptable header values, and the HTTP methods that are permitted.
If the access-control-allow-origin header is missing, our request, although successful, will be blocked by the browser, and we will not be able to access its response.
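To make this concrete, here is a minimal sketch of a server that whitelists a single origin; the origin, port, and payload below are placeholders rather than values from the Google example above:

```js
// server.js -- a minimal sketch of a server whitelisting one origin.
// The origin, port, and payload are placeholder values.
const http = require('http');

http
  .createServer((req, res) => {
    // Only this origin will be allowed to read the response in the browser.
    res.setHeader('Access-Control-Allow-Origin', 'https://app.example.com');
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
    res.end(JSON.stringify({ ok: true }));
  })
  .listen(4000);
```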
So far, we have only discussed an example in which a page served from one origin makes a request to a server on another origin. In the example above, the notifications origin is whitelisted to contact the play server on google.com. But it would be unreasonable to whitelist all origins by setting access-control-allow-origin to * unless it is a public server. Another common pattern during development is to run our UI application at localhost:$port, but whitelisting localhost to facilitate API calls is an anti-pattern and should be avoided.
Instead, we should use a proxy server to deal with the restrictions imposed by the browser. The proxy server, in this case, takes on the onus of handling our requests and responses and making the modifications necessary to facilitate cross-origin communication. To understand some of the internal workings of a proxy server, let us take a look at the popular Node.js proxy library node-http-proxy.
At a very high level, when a request is initiated by an application that uses node-http-proxy (set up properly), it goes through two pipelines, as seen in this excerpt from the documentation:
When a request is proxied it follows two different pipelines (available here) which apply transformations to both the req and res object. The first pipeline (incoming) is responsible for the creation and manipulation of the stream that connects your client to the target. The second pipeline (outgoing) is responsible for the creation and manipulation of the stream that, from your target, returns data to the client.
A deeper investigation reveals that the requests which are being sent out are captured and, based on the configuration provided, are overridden/modified here. The modification of the request path to the proxy path can be found here.
Similarly, for the incoming response, the responses are captured and modified mostly here.
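To see these pipelines from the outside, a bare-bones use of node-http-proxy looks roughly like the following; the target and ports are placeholder values:

```js
// proxy.js -- a minimal sketch; the target and ports are placeholders.
const http = require('http');
const httpProxy = require('http-proxy');

// Create a reusable proxy instance.
const proxy = httpProxy.createProxyServer({ changeOrigin: true });

// Every request hitting this server goes through the incoming pipeline,
// is forwarded to the target, and the target's response is streamed back
// through the outgoing pipeline to the original client.
http
  .createServer((req, res) => {
    proxy.web(req, res, { target: 'http://localhost:5000' });
  })
  .listen(8000);
```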
Now that we have a basic understanding of why we need a proxy server and how one works internally, we can move on to how a React application uses a proxy server. Let us check out a few of the most common ways in which we handle HTTP requests in a React application.
Before examining the use cases below, create a new application using create-react-app, which uses webpack-dev-server to start our development server. webpack-dev-server optionally accepts a proxy object with the structure defined here. Let us break down the steps create-react-app takes to analyze how it handles the proxying of requests.
const proxySetting = require(paths.appPackageJson).proxy;
First, it extracts the proxy configuration from the package.json file.
const proxyConfig = prepareProxy(proxySetting, paths.appPublic);
The prepareProxy method then prepares the necessary proxy configuration by combining the proxySetting extracted in the previous step with some valid defaults based on the execution environment. The generated proxy config will eventually be passed down to node-http-proxy, which is used by webpack-dev-server to proxy requests.
const serverConfig = createDevServerConfig(
proxyConfig,
urls.lanUrlForConfig
);
The next step is to create the webpack dev server configuration using the proxyConfig.
const devServer = new WebpackDevServer(compiler, serverConfig);
Finally, the devServer is set up using the serverConfig and the compiler.
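Putting these steps together, the proxy object that webpack-dev-server ends up consuming has roughly the following shape; the /api path and target are placeholders rather than anything create-react-app generates by default:

```js
// webpack.config.js (illustrative)
module.exports = {
  // ...rest of the webpack configuration
  devServer: {
    proxy: {
      // Requests starting with /api are forwarded to the target below.
      '/api': {
        target: 'http://localhost:5000',
        changeOrigin: true,
      },
    },
  },
};
```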
In the examples below, we will examine the different ways of proxying our requests within a React application.
1. Using fetch
With a new application created using the create-react-app CLI, we can jump straight into coding:
Without a proxy, the request is going to be rejected by the google.com server, and we see the rejection logged on the console.
<a href="https://medium.com/media/2a8d12e97adb081fd29a3943ac7641ea/href">https://medium.com/media/2a8d12e97adb081fd29a3943ac7641ea/href</a>
But when we add the proxy entry to our package.json, the request is successfully proxied and we can load the information returned by the server. In this case, the response is a blob, so we need a bit of extra processing to extract the text.
<a href="https://medium.com/media/986cc9e7ff9fa2d2789381ae7dfafdf6/href">https://medium.com/media/986cc9e7ff9fa2d2789381ae7dfafdf6/href</a>
Updated component:
<a href="https://medium.com/media/c9473ceb92f0c19cb267af5b4c93ed03/href">https://medium.com/media/c9473ceb92f0c19cb267af5b4c93ed03/href</a>
and the result is logged to the UI as follows:
2. Using custom targets for different paths
Since not all requests go to the same server, we can define paths and a target for each path in our package.json file:
<a href="https://medium.com/media/b256d772198d2e373ca68250bc59ab6b/href">https://medium.com/media/b256d772198d2e373ca68250bc59ab6b/href</a>
When our component makes requests, this configuration is applied and the requests are sent to the corresponding servers:
<a href="https://medium.com/media/890583b1abad55c61906fa96d8cc7c44/href">https://medium.com/media/890583b1abad55c61906fa96d8cc7c44/href</a>
The responses arrive out of order because of the async nature of the blob and .json() methods, as seen below:
One peculiar thing to notice is that we used the changeOrigin flag in our package.json file. This flag changes the origin of the host header to the target URL, thus enabling a successful connection. There are other similarly helpful options available here.
3. Using axios
If your application uses axios instead of fetch for making HTTP requests, setting up the proxy is no different from what we have done so far.
Let us add another path to get the posts from typicode using axios. Install axios using npm or yarn before proceeding to update the package.json file as shown below:
<a href="https://medium.com/media/50c1793c46b3698950e1a6c16142289d/href">https://medium.com/media/50c1793c46b3698950e1a6c16142289d/href</a>
The component can now be updated to make requests with axios:
<a href="https://medium.com/media/b4e9ad26ee7519f581523699287e2118/href">https://medium.com/media/b4e9ad26ee7519f581523699287e2118/href</a>
The advantage of using axios is that we can add additional options and features to our HTTP requests with ease. One such feature is the use of interceptors, which intercept requests and responses per application rather than per request.
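For example, a minimal interceptor setup (the header name and logging here are just examples) could look like this:

```js
// interceptors.js -- a minimal sketch of application-wide interceptors.
import axios from 'axios';

// Runs before every request made through this axios instance.
axios.interceptors.request.use((config) => {
  config.headers['X-Requested-With'] = 'XMLHttpRequest';
  return config;
});

// Runs after every response, before the calling code sees it.
axios.interceptors.response.use((response) => {
  console.log(`${response.config.url} -> ${response.status}`);
  return response;
});
```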
The full code base for the examples shown above can be found here.
If you enjoyed this blog, be sure to give it a few claps, read more, or follow me on LinkedIn.