After a long day spent trying to deploy my .NET Core microservice to IIS 7, I finally persuaded my client to port our app to a Linux server. In this post, I am going to create a simple React app that uses a .NET Core backend service; my real goal, though, is to write about the tricky deployment process. Here are the steps:

1. Creating a sample React app and getting the production-ready files
2. Creating a sample .NET Core app
3. Connecting to a Linux machine and sending the files over
4. Installing and configuring Nginx — deployment for the client
5. Installing and configuring Supervisor — deployment for the server

Creating a sample React application

To create the boilerplate code, I am going to use create-react-app, the official CLI for React applications. I had zero problems with it; you can find further information here.

1. Install it:

$ npm install -g create-react-app

2. Create your project:

$ create-react-app sample-react-app

3. Check that everything is OK. The application will be running on port 3000:

$ cd sample-react-app
$ npm start

4. Let's prepare the build files for deployment:

$ npm run build

Now you have a build folder in your project directory that contains:

build/
  asset-manifest.json
  favicon.ico
  index.html
  static/

Cool, we have a sample front-end base now. We will send this folder to the remote server.

Creating a sample .NET Core application

I could use Visual Studio 2015 to create our backend base, but for those who have a Mac or Linux machine, I prefer Yeoman.

1. Install Yeoman:

$ npm install -g yo

2. Install the ASP.NET generator:

$ npm install -g generator-aspnet

3. Create your .NET Core microservice:

$ yo aspnet

Choose:
* Web Application Basic [without Membership and Authorization]
* Bootstrap

Name it:
* SampleMicroservice

Go into the created folder and change the Index method of HomeController:

public IActionResult Index()
{
    return Json(new { Hello = "From the api" });
}

4.
Open Program.cs and add .UseUrls("http://0.0.0.0:5000") to the web host builder chain, after .UseKestrel(). Run the service, hit http://localhost:5000, and make sure the JSON appears on screen.

Connecting to a Linux machine and sending the files over

For a cheap Linux VPS, I prefer DigitalOcean. For your personal projects, you can use a Raspberry Pi plus a static IP as your own server.

To send what we have created so far, you can use a version control system like GitHub or SVN, or use the SFTP protocol.

SFTP reminder:

$ sftp username@remote_hostname_or_IP

# Basic commands:
pwd                      -> remote working directory
ls                       -> remote ls
lpwd                     -> your local working directory
lls                      -> ls on your local machine
get remoteFile localFile -> fetch a file from the remote server to your machine
put localFile            -> send a file to the remote server
# simply put "l" as a prefix to run a command on your local machine

Installing Nginx and configuration — deployment for the client

So, we are in our Linux server, connected via ssh, and the build folder produced by npm run build is in our workspace.

SSH reminder:

$ ssh user_name@IP
Then enter your password.

Install Nginx:

$ sudo apt-get install nginx

Edit the configuration file with vim (or nano):

$ vim /etc/nginx/sites-available/default

The default configuration looks like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # Some comments...

    root /var/www/html;    # STATIC FILE LOCATION

    # Some comments...

    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        # Some comments...
        try_files $uri /index.html;    # ADD THIS
    }

    # Some comments...
}

First, you should move all the files in your build folder to the static file location, which is /var/www/html by default:

$ cp -a /your/build/folder/location/build/* /var/www/html/

Before you copy the files, add try_files $uri /index.html; inside the location / block of the Nginx configuration, because you don't want to serve a 404 Not Found error to a user who navigates directly to a deep URL in your single-page app.
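To see what that try_files directive does in practice, here is a small shell sketch that imitates Nginx's lookup decision. This is purely illustrative: Nginx performs this check internally against the web root, and the file names below are made up for the example.

```shell
# Imitate nginx's `try_files $uri /index.html` lookup against a fake web root.
webroot=$(mktemp -d)
mkdir -p "$webroot/static"
touch "$webroot/index.html" "$webroot/static/main.js"

resolve() {
  # Serve the requested path if a matching file exists,
  # otherwise fall back to index.html (handing routing to React Router).
  if [ -f "$webroot$1" ]; then
    echo "$1"
  else
    echo "/index.html"
  fi
}

resolve /static/main.js   # a real asset: served as-is
resolve /users/42         # a client-side route: falls back to index.html
```

Real assets (the JS bundle, the favicon) are still served directly; only unknown paths fall through to index.html.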
You are simply saying: "Any route should fall back to /index.html; let React Router take the responsibility."

After restarting Nginx, your homepage should be served at http://your_ip:

$ sudo service nginx restart

Installing Supervisor and configuration — deployment for the server

We are again in our Linux server, and our .NET Core project folder is in our workspace. I prefer explaining this step by step:

1. Create a user to run your future daemon (service), and log in as that user:

$ adduser sampleMicroService
...
$ usermod -aG sudo sampleMicroService
$ su sampleMicroService

2. Install dotnet by these steps.

3. Prepare your DLL files. Go into your project folder and produce the deployment files:

$ dotnet restore
$ dotnet run       (optional, to check that your app runs successfully)
$ dotnet publish

The publish folder should be in /bin/Debug/netcoreapp1.x/.

4. Move all the files in the publish folder to somewhere under /var:

$ mv /your/publish/folder/location/publish/* /var/SampleMicroservice/

Note: The reason we compile and publish our DLLs on the deployment server instead of our local machine is that we want the /home/sampleMicroService/.nuget/packages folder to exist there, so we can set it as an environment variable in the Supervisor configuration. This is actually a bug in the dotnet CLI: if we don't set that environment variable, an ArgumentNullException is thrown with the message "path1 can't be null". Here are the bug details.

5. Install Supervisor to create your daemon:

$ sudo apt-get install supervisor

6.
Create the Supervisor configuration file:

$ sudo vim /etc/supervisor/conf.d/sampleMicroService.conf

And put this inside:

[program:sampleMicroService]
command=/usr/bin/dotnet /var/sampleMicroService/SampleMicroService.dll
directory=/var/sampleMicroService/
autostart=true
autorestart=true
stderr_logfile=/var/log/sampleMicroService.err.log
stdout_logfile=/var/log/sampleMicroService.out.log
environment=ASPNETCORE_ENVIRONMENT=Production,NUGET_PACKAGES="/home/sampleMicroService/.nuget/packages"
user=sampleMicroService
stopsignal=INT

Be sure that you set the NUGET_PACKAGES environment variable here.

Stop Supervisor, enable it in systemctl, and start it again:

$ sudo service supervisor stop
$ sudo systemctl enable supervisor
$ sudo service supervisor start

From now on you can manage your service with:

$ sudo supervisorctl start/stop/status sampleMicroService

Check the error or output logs:

$ tail -f /var/log/sampleMicroService.err.log
$ tail -f /var/log/sampleMicroService.out.log

That's it — your microservice is running on http://YOUR_IP:5000.

PS: If you use DigitalOcean like me, you can easily add your domain name by clicking "Add domain" in the droplet menu.

Thanks! Sometimes I tweet useful pieces of information: @_skynyrd

By the way, my references:

- Official documentation
- Hanselman's post
- Cameronbwhite90's post
- Stackoverflow