The path forward for accessing a gRPC server directly from a browser seems to be Google's grpc-web project, though it relies on Envoy Proxy as a gateway in between.
Note that there is another, older project that is also called grpc-web (https://github.com/improbable-eng/grpc-web); the one used here, however, is from gRPC/Google itself: https://github.com/grpc/grpc-web. This is pretty exciting! From their site:
“gRPC-Web provides a Javascript client library that lets browser clients access a gRPC server … The JS client library has been used for some time by Google and Alphabet projects with the Closure compiler and its TypeScript generator (which has not yet been open-sourced).
gRPC-Web clients connect to gRPC servers via a special gateway proxy: our provided version uses Envoy, in which gRPC-Web support is built-in. Envoy will become the default gateway for gRPC-Web by GA.
The current release is a Beta release, and we expect to announce General-Available by Oct. 2018.”
gRPC is clearly a neat technology when it comes to microservices. Typed, versioned interfaces are one hidden requirement for getting microservices or SOA right. One question that always comes up, and something developers (including me) miss, is how accessible a gRPC API is compared with a REST API. It is not hard to fire up a simple Python or Node.js/TypeScript client to access the APIs, but it would be great if they were as accessible from a browser as REST APIs are.
The official tutorial (https://github.com/grpc/grpc-web/blob/master/net/grpc/gateway/examples/echo/tutorial.md) is good enough, but I stumbled a few times. This post is based on that tutorial, its examples, and some partial work I did earlier on proxying via Envoy; I hope it helps.
Here is what we are going to do
(Architecture diagram, drawn with LucidChart: browser -> Envoy proxy on port 8080 -> gRPC server on port 17007.)
Step 1:
First start your gRPC server (any language) on some port, say 17007. Here is a sample protocol buffer (proto) file; I have masked out parts, and you can use any proper proto file. A minimal Python sketch of a matching server follows the proto file.
syntax = "proto3";
package xxx.yyy_service;
service SomeService {
  rpc testMe(Parameters) returns (Response);
  // will work with server side streams also
  rpc testMeStream(Parameters) returns (stream Response);
}

message Response {
  int32 response = 1;
}

message FileDetails {
  string fileUrl = 1;
  string outputfolder = 2;
}

message Parameters {
  FileDetails file_details = 1;
}
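For reference, here is a minimal sketch of what a matching server could look like in Python (grpcio). The yyy_pb2 / yyy_pb2_grpc module names are assumptions based on the masked proto file name; generate them with grpcio-tools.

from concurrent import futures
import grpc

# assumed module names, generated with:
#   python -m grpc_tools.protoc -I ../Interfaces/xxx --python_out=. --grpc_python_out=. yyy.proto
import yyy_pb2
import yyy_pb2_grpc

class SomeService(yyy_pb2_grpc.SomeServiceServicer):
    def testMe(self, request, context):
        # unary call: just acknowledge the request
        print("testMe called for", request.file_details.fileUrl)
        return yyy_pb2.Response(response=1)

    def testMeStream(self, request, context):
        # server-side streaming variant
        for i in range(3):
            yield yyy_pb2.Response(response=i)

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    yyy_pb2_grpc.add_SomeServiceServicer_to_server(SomeService(), server)
    server.add_insecure_port("[::]:17007")  # the port Envoy will forward to
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()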
Step 2: Envoy Proxy Bridge
You need to start an Envoy proxy to bridge gRPC (HTTP/2) with browser communication (HTTP/1.1). Envoy, originally from Lyft (the ride-share company), is the backbone of Kubernetes-related service meshes such as Istio, perhaps because it has supported gRPC from the beginning.
Here is a sample Envoy proxy configuration envoy.yaml and the associated Dockerfile to build Envoy with this configuration.
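The exact envoy.yaml depends on the Envoy version, so here is only a rough sketch of the shape it can take. The cluster name echo_service follows the grpc-web echo example, and filter/field names differ between Envoy releases, so treat this as a starting point rather than a drop-in config.

static_resources:
  listeners:
  - name: grpc_web_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }  # browser-facing HTTP/1.1 port
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: echo_service }
          http_filters:
          # translates gRPC-Web requests from the browser into plain gRPC;
          # the HTTP/1.1 bridge behaviour described under Step 3 comes from the
          # separate envoy.filters.http.grpc_http1_bridge filter
          - name: envoy.filters.http.grpc_web
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: echo_service  # our gRPC server from Step 1
    connect_timeout: 5s
    type: STRICT_DNS
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}  # speak HTTP/2 (gRPC) to the backend
    load_assignment:
      cluster_name: echo_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 17007 }

If the page is served from a different origin than Envoy (as it is in Step 7, port 5000), you will also need a CORS policy here; the grpc-web echo example shows one.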
Two parts of this are interesting: the HTTP listener on port 8080, and the echo_service cluster, which points to our gRPC server on the local IP at port 17007.
With these two files (the Dockerfile is sketched below) let us build an Envoy Docker image. Run the command below in the same folder where you have the Dockerfile and envoy.yaml; change the ports as you wish.
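The Dockerfile can be as small as copying the configuration into a stock Envoy image; the image tag below is just an example, so pick whichever Envoy version you are targeting.

# Dockerfile: bake our envoy.yaml into the stock Envoy image
FROM envoyproxy/envoy:v1.30-latest
COPY envoy.yaml /etc/envoy/envoy.yaml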
Build envoy
sudo -E docker build -t envoy:v1 .
Step 3: Run Envoy proxy.
sudo docker run -p 8080:8080 --net=host envoy:v1
I am running this on the host network (so -p has no effect here) because otherwise port 17007 on the host is not reachable from inside the Envoy container. Better would be to run your gRPC server inside the Envoy container (or vice versa), or to create a network layer (say, in Kubernetes) where the ports are mutually accessible.
Now we have the gRPC server running on port 17007 and Envoy listening on 8080, taking HTTP/1.1 traffic from the browser and forwarding it to port 17007 as gRPC over HTTP/2. But this conversion has to follow the gRPC framing rules (feel free to skip to Step 4, as what follows is not generally interesting).
When a request is sent, the filter sees if the connection is HTTP/1.1 and the request content type is application/grpc.
If so, when the response is received, the filter buffers it and waits for trailers and then checks the grpc-status code. If it is not zero, the filter switches the HTTP response code to 503. It also copies the grpc-status and grpc-message trailers into the response headers so that the client can look at them if it wishes.
The client should send HTTP/1.1 requests that translate to the following pseudo headers:
:method: POST
:path: <gRPC-METHOD-NAME>
content-type: application/grpc
The body should be the serialized gRPC body, which is:
1 byte of zero (not compressed)
4 bytes of proto message length, in network byte order
the serialized proto message
Because this scheme must buffer the response to look for the grpc-status trailer it will only work with unary gRPC APIs.
Via Postman you can try the same thing: a POST to http://localhost:8080/xxx.yyy_service.SomeService/testMe with Content-Type application/grpc and a body framed as above.
On the gRPC server side I could see the requests redirected from Envoy:
E0724 10:01:51.297989923 1685 http_server_filter.cc:241] GET request without QUERY
E0724 10:02:13.139528491 1685 b64.cc:168] Invalid padding detected.
E0724 10:02:21.578859033 1685 b64.cc:168] Invalid padding detected.
E0724 10:04:44.931591481 1685 b64.cc:168] Invalid padding detected.
This is because I had not followed the framing and padding rules. You can try a simple Python client like the one below to frame the body correctly.
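Here is a rough sketch of such a client, assuming the bridge behaviour described above is in place on the Envoy listener (localhost:8080) and that Python message stubs were generated from the same proto file (yyy_pb2 is an assumed module name):

import struct

import requests   # pip install requests
import yyy_pb2    # assumed module name, generated from yyy.proto

# build the request message
params = yyy_pb2.Parameters()
params.file_details.fileUrl = "http://example.com/input.bin"  # placeholder values
params.file_details.outputfolder = "/tmp/out"
payload = params.SerializeToString()

# gRPC framing: 1 flag byte (0 = not compressed) + 4-byte big-endian length + message
body = struct.pack(">BI", 0, len(payload)) + payload

resp = requests.post(
    "http://localhost:8080/xxx.yyy_service.SomeService/testMe",
    headers={"Content-Type": "application/grpc"},
    data=body,
)
print(resp.status_code, resp.headers.get("grpc-status"), resp.headers.get("grpc-message"))

# the response body carries the same 5-byte prefix in front of the serialized Response
if len(resp.content) > 5:
    msg_len = struct.unpack(">I", resp.content[1:5])[0]
    result = yyy_pb2.Response()
    result.ParseFromString(resp.content[5:5 + msg_len])
    print("response =", result.response)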
Step 4: Generate JS Client Stubs
Generate the client stubs. For this you need just protoc:
protoc -I=../Interfaces/xxx/ --js_out=import_style=closure,binary:./build ../Interfaces/xxx/yyy.proto
This will generate a slew of .js files containing the message definitions; in our case filedetails.js, parameters.js and response.js. Place all of them in one folder.
Step 5: Generate the gRPC Service Stub
Now we need to generate the gRPC service stub. For this you first need to build grpc-web's protoc-gen-grpc-web binary.
Clone the grpc-web repository, then:
cd grpc-web/javascript/net/grpc/web/
make  # this will generate the protoc-gen-grpc-web executable
Note the path of the protoc-gen-grpc-web binary. Use it to generate the gRPC-related stubs:
protoc -I=../Interfaces/xxx/ --plugin=protoc-gen-grpc-web=/home/alex/coding/grpc-web-google/grpc-web/javascript/net/grpc/web/protoc-gen-grpc-web --grpc-web_out=out=client.grpc.pb.js,mode=grpcwebtext:./build ../Interfaces/xxx/yyy.proto
Note the mode name (grpcwebtext) in the command above.
Step 6: Merge all JS files together
Combine all the generated *.js files into a single one using Google's Closure Compiler (the tutorial follows another method). I got the Closure approach from this document: https://github.com/grpc/grpc-web/blob/master/INSTALL.md
$ wget http://dl.google.com/closure-compiler/compiler-latest.zip -O compiler-latest.zip
$ unzip -p -qq -o compiler-latest.zip *.jar > closure-compiler.jar
Move the .jar to the root of the grpc-web project and run:
java -jar ./closure-compiler.jar --js ./javascript --js ./net --js /home/alex/coding/closure-library --js /home/alex/coding/protobuf/js --js /home/alex/coding/xxx/js-generated/*.js --entry_point=goog:xxx.yyy_service.SomeServiceClient --dependency_mode=STRICT --js_output_file compiled.js
Note: the sub-modules inside third_party were empty after checking out grpc-web, so I had to clone closure-library separately; I already had the protobuf library cloned from building gRPC. The rest of the packages are in the root of grpc-web.
This will give us one compiled JavaScript file, compiled.js. Using it, we can write an HTML page that calls the service.
Step 7: Write the JS code and serve it via NGINX
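As a bare-bones sketch of such a page (the proto.xxx.yyy_service namespace and the getter/setter names are assumptions; check the goog.provide lines in the generated .js files for the exact symbols):

<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>grpc-web test</title></head>
<body>
<!-- the single file produced by the Closure Compiler in Step 6 -->
<script src="compiled.js"></script>
<script>
  // the client talks to the Envoy listener (8080), not to the gRPC server directly
  var client = new proto.xxx.yyy_service.SomeServiceClient('http://localhost:8080');

  var fileDetails = new proto.xxx.yyy_service.FileDetails();
  fileDetails.setFileurl('http://example.com/input.bin');  // placeholder values;
  fileDetails.setOutputfolder('/tmp/out');                 // verify setter names in filedetails.js

  var request = new proto.xxx.yyy_service.Parameters();
  request.setFileDetails(fileDetails);

  // unary call: the callback receives (err, response)
  client.testMe(request, {}, function (err, response) {
    if (err) {
      console.log('grpc error', err.code, err.message);
    } else {
      console.log('response', response.getResponse());
    }
  });
</script>
</body>
</html>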
Notice how we included the compiled.js file and also how the request is framed.
If we serve this by simply opening the file in a browser locally, we may run into CORS problems, so here is a bare-bones nginx.conf to serve this HTML.
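Something along these lines is enough; the listen port and root here match the docker run command below, so adjust the paths to your setup:

# nginx.conf: serve the compiled HTML/JS on port 5000
events {}
http {
  include /etc/nginx/mime.types;
  server {
    listen 5000;
    location / {
      root  /usr/share/nginx/html;  # where the docker command mounts our files
      index index.html;
    }
  }
}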
Run the NGINX server with the nginx.conf file; it starts a web server on port 5000.
sudo docker run --rm --net=host -v /home/alex/coding/xx/js-generated/compiled/:/usr/share/nginx/html -v /home/alex/coding/xx/js-generated/compiled/nginx.conf:/etc/nginx/nginx.conf:ro nginx
That's it. Point a browser at http://localhost:5000 (I tested with Chrome) and use the developer tools (F12 / Ctrl-Shift-C in Chrome) to watch the console logs, since the script executes while the page loads.
References