The path forward to access a gRPC server directly from a browser seems to be Google's grpc-web project (https://github.com/grpc/grpc-web), though it uses the Envoy proxy internally. Note that there is an older project that is also called grpc-web, from Improbable (https://github.com/improbable-eng/grpc-web). But the one discussed here is from gRPC/Google itself. This is pretty exciting! From their site:

"gRPC-Web provides a Javascript client library that lets browser clients access a gRPC server. The JS client library has been used for some time by Google and Alphabet projects with the Closure compiler and its TypeScript generator (which has not yet been open-sourced). gRPC-Web clients connect to gRPC servers via a special gateway proxy: our provided version uses Envoy, in which gRPC-Web support is built-in. Envoy will become the default gateway for gRPC-Web by GA. The current release is a Beta release, and we expect to announce General-Available by Oct. 2018."

gRPC is clearly a neat technology when it comes to microservices. One question that always comes up, and what developers (including me) miss, is the accessibility of a gRPC API compared with a REST API. It is not hard to fire up a simple Python or Node.js/TypeScript client to access the APIs; but it would be great if they were as accessible from a browser as REST APIs are. Typed, versioned interfaces are a hidden requirement if you want to get microservices or SOA right.

The official tutorial (https://github.com/grpc/grpc-web/blob/master/net/grpc/gateway/examples/echo/tutorial.md) is good enough, but I stumbled a few times. Hope this helps; it is based on that tutorial plus its examples and some partial work I did for proxying via Envoy.

Here is what we are going to do (thanks to LucidChart drawing SW):

Step 1: Start the gRPC server

First start your gRPC server (any language) at some port, say 17007. Here is a sample protocol buffer / proto file (I have masked out parts; you can use any proper proto file):
```proto
syntax = "proto3";

package xxx.yyy_service;

service SomeService {
  rpc testMe(Parameters) returns (Response); // will work with server-side streams also
  rpc testMeStream(Parameters) returns (stream Response);
}

message Response {
  int32 response = 1;
}

message FileDetails {
  string fileUrl = 1;
  string outputfolder = 2;
}

message Parameters {
  FileDetails file_details = 1;
}
```

Step 2: Envoy proxy bridge

You need to start an Envoy proxy to bridge gRPC (HTTP/2) with browser communication (HTTP/1.1). Envoy, from Lyft (the ride-share company), is the backbone of Kubernetes-related service meshes like Istio, maybe due to its support for gRPC from the beginning.

Here is a sample Envoy proxy configuration (envoy.yaml) and the associated Dockerfile to build Envoy with this configuration.

Two parts of this are interesting: the HTTP listener listening at 8080, and the route moving traffic to echo_service, our gRPC server at the local IP and port 17007.

With these two, let us build an Envoy Docker image (run the command below in the same folder where you have the Dockerfile and envoy.yaml; change the ports as you wish).

Build Envoy:

```shell
sudo -E docker build -t envoy:v1 .
```

Step 3: Run the Envoy proxy

```shell
sudo docker run -p 8080:8080 --net=host envoy:v1
```

I am running this in the host network (-p has no meaning now), as from within the Envoy image the port 17007 is inaccessible. Better would be to run your gRPC server within the Envoy Docker image or vice versa, or to create a network layer (say in Kubernetes) where the ports are accessible.

Now we have the gRPC server running on 17007 and Envoy intercepting the browser's HTTP/1.1 traffic at 8080 and directing it to 17007 as gRPC over HTTP/2. But this conversion has to follow the gRPC framing rules.
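The envoy.yaml itself is embedded in the original post and not reproduced here; a minimal sketch along the lines described above (listener on 8080, gRPC-Web translation, upstream cluster on 17007) might look like the following. The filter names and the schema are assumptions based on Envoy's v2-era configuration API; check the docs of your Envoy version.

```yaml
static_resources:
  listeners:
  - address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: echo_service }
          http_filters:
          - name: envoy.grpc_web   # translates gRPC-Web <-> gRPC
          - name: envoy.cors
          - name: envoy.router
  clusters:
  - name: echo_service
    connect_timeout: 0.25s
    type: static
    lb_policy: round_robin
    http2_protocol_options: {}   # the upstream gRPC server speaks HTTP/2
    hosts: [{ socket_address: { address: 127.0.0.1, port_value: 17007 }}]
```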
(Feel free to skip to Step 4, as what follows is not generally interesting.)

From https://www.envoyproxy.io/docs/envoy/v1.5.0/configuration/http_filters/grpc_http1_bridge_filter#config-http-filters-grpc-bridge:

"When a request is sent, the filter sees if the connection is HTTP/1.1 and the request content type is application/grpc. If so, when the response is received, the filter buffers it and waits for trailers and then checks the grpc-status code. If it is not zero, the filter switches the HTTP response code to 503. It also copies the grpc-status and grpc-message trailers into the response headers so that the client can look at them if it wishes.

The client should send HTTP/1.1 requests that translate to the following pseudo headers:

:method: POST
:path: <gRPC-METHOD-NAME>
content-type: application/grpc

The body should be the serialized gRPC body, which is:

1 byte of zero (not compressed).
network order 4 bytes of proto message length.
serialized proto message.

Because this scheme must buffer the response to look for the grpc-status trailer, it will only work with unary gRPC APIs."

Via Postman you can try the following. In the gRPC server log I managed to see the redirect from Envoy:

E0724 10:01:51.297989923 1685 http_server_filter.cc:241] GET request without QUERY
E0724 10:02:13.139528491 1685 b64.cc:168] Invalid padding detected.
E0724 10:02:21.578859033 1685 b64.cc:168] Invalid padding detected.
E0724 10:04:44.931591481 1685 b64.cc:168] Invalid padding detected.

This is because I have not followed the padding rules. You can try a simple Python client to do the body padding.

Step 4: Generate the JS client stubs

Generate the client stubs. For this you need just protoc:

```shell
protoc -I=../Interfaces/xxx/ --js_out=import_style=closure,binary:./build ../Interfaces/xxx/yyy.proto
```

This will generate a slew of .js files containing the message definitions. Place all of these in one folder.
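Coming back to the body format quoted in the bridge-filter docs above (one zero byte for "not compressed", four length bytes in network order, then the serialized proto message), it can be sketched in a few lines of Node.js. The payload here is a dummy stand-in for what protobuf's serializeBinary() would produce:

```javascript
// Build the gRPC frame: flag byte + 4-byte big-endian length + message.
function frameGrpcBody(serialized) {
  const header = Buffer.alloc(5);
  header.writeUInt8(0, 0);                     // compression flag: 0 = not compressed
  header.writeUInt32BE(serialized.length, 1);  // message length, network (big-endian) order
  return Buffer.concat([header, serialized]);
}

// Dummy "serialized proto message" for illustration only.
const payload = Buffer.from([0x0a, 0x03, 0x61, 0x62, 0x63]);
const body = frameGrpcBody(payload);

console.log(body.length);          // 10 (5-byte header + 5-byte payload)
console.log(body[0]);              // 0 -> uncompressed
console.log(body.readUInt32BE(1)); // 5 -> payload length
```

This is the body you would POST with content-type application/grpc when experimenting via Postman or a small script.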
The generated message files in our case are filedetails.js, parameters.js and response.js.

Step 5: Generate the gRPC service stub

Now we need to generate the gRPC service stub. For this you first need to make grpc-web's protoc-gen-grpc-web binary:

```shell
# Clone the grpc-web repository, then:
cd grpc-web/javascript/net/grpc/web/
make  # this will generate the protoc-gen-grpc-web executable
```

Note the path of the protoc-gen-grpc-web binary. Use it to generate the gRPC-related stubs:

```shell
protoc -I=../Interfaces/xxx/ \
  --plugin=protoc-gen-grpc-web=/home/alex/coding/grpc-web-google/grpc-web/javascript/net/grpc/web/protoc-gen-grpc-web \
  --grpc-web_out=out=client.grpc.pb.js,mode=grpcwebtext:./build \
  ../Interfaces/xxx/yyy.proto
```

Note the mode name grpcwebtext.

Step 6: Merge all the JS files together

Combine all the generated *.js files into a single one using Google's Closure compiler. (In the tutorial another method is followed.) I got the Closure method from this document: https://github.com/grpc/grpc-web/blob/master/INSTALL.md

```shell
wget -O compiler-latest.zip http://dl.google.com/closure-compiler/compiler-latest.zip
unzip -p -qq -o compiler-latest.zip *.jar > closure-compiler.jar
```

Move the .jar to the root of the grpc-web project.

```shell
java -jar ./closure-compiler.jar \
  --js ./javascript --js ./net \
  --js /home/alex/coding/closure-library \
  --js /home/alex/coding/protobuf/js \
  --js /home/alex/coding/xxx/js-generated/*.js \
  --entry_point=goog:xxx.yyy_service.SomeServiceClient \
  --dependency_mode=STRICT \
  --js_output_file compiled.js
```

Note: the sub-modules inside third_party were empty after checkout of grpc-web, so I had to clone closure-library separately; I already had the protobuf library cloned for building gRPC. The rest of the packages are in the root of grpc-web.

This will give us one compiled JavaScript file. Using this we can write an HTML file that uses it.

Step 7: Write the JS code and serve via NGINX

Notice how we include the compiled.js file and also how the request is framed. If we serve this HTML by opening the file directly in a browser, you may run into CORS problems.
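The nginx.conf from the original post is embedded there and not reproduced in this text; a bare-bones sketch that serves the static HTML and compiled.js on port 5000 (the port used by the docker run command below; the paths mirror the volume mounts there, everything else is an assumption) could look like this:

```nginx
events {}

http {
  include /etc/nginx/mime.types;

  server {
    listen 5000;

    location / {
      root  /usr/share/nginx/html;  # where compiled.js and index.html are mounted
      index index.html;
    }
  }
}
```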
So here is a bare-bones nginx.conf to serve this HTML.

Step 8: Run the NGINX server

Run the NGINX server with the nginx.conf file; it starts a web server on port 5000:

```shell
sudo docker run --rm --net=host \
  -v /home/alex/coding/xx/js-generated/compiled/:/usr/share/nginx/html \
  -v /home/alex/coding/xx/js-generated/compiled/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx
```

That's it. Use a browser and go to http://localhost:5000. I tested with Chrome; use the developer view (F12 / Ctrl-Shift-C in Chrome) to see the console logs, as the script is executed while the page is loaded.

References

REST is not the Best for Micro-Services: GRPC and Docker makes a compelling case (hackernoon.com) — "For quite a long time, when Service Oriented Architecture (SOA) and WebService were the talk of the tech town, most of…"