If you follow the Microsoft development community at all, you’ve most likely already heard of the new web development framework called Blazor. If you haven’t heard of it, here’s an overview from the product site:
Blazor lets you build interactive web UIs using C# instead of JavaScript. Blazor apps are composed of reusable web UI components implemented using C#, HTML, and CSS. Both client and server code is written in C#, allowing you to share code and libraries.
As a developer who's spent most of their career on the back-end side of .NET applications writing C# code, I find this certainly appealing. I've always wanted to build a personal website for myself, so I took the time to do so using Blazor and found that the process was very simple and the code needed rather minimal. This post will describe, at a high level, what you need to build out a similar application.
The source code being referenced throughout can be found in this GitHub repository. Note that, as of writing this, the project is still in development.
As you may already know, Blazor is built on top of the open-source .NET Core SDK/runtime. So I'd suggest making sure you have the latest versions of:

- The .NET Core SDK
- Visual Studio (with the ASP.NET and web development workload)
With those, we can now create the project. When creating a new project in Visual Studio, you should now have an option for a Blazor App:
After selecting this and providing the name and location of the project, you should be presented with the following options:
These options represent the two hosting models available in Blazor. We're going to use the WebAssembly model, since it allows us to serve the application statically and run it (yes, the entire .NET/C# application) directly in the browser. Note that Blazor WebAssembly is currently in preview (as of April 17, 2020).
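If you'd rather use the command line than Visual Studio, the same project can be created with the .NET CLI. Here's a quick sketch, assuming the Blazor WebAssembly template is available on your machine (during the preview it has to be installed separately) and using the Tayco.Web project name you'll see later in this post:

```bash
# Scaffold a new Blazor WebAssembly project into the Tayco.Web directory
dotnet new blazorwasm -o Tayco.Web
```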
Upon creating the project, a default Blazor application will be generated for you. You should be able to build/run this and see a page similar to the following:
The Blazor framework is relatively simple, so this shouldn't be too complicated. Essentially, the application contains the following:

Pages (files with the `.razor` file extension) — these define the route and layout of a given page in the UI. These pages can contain HTML elements and C# code. In the following example, you can see the C# code defined using the `@code` block:

```razor
@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    private void IncrementCount()
    {
        currentCount++;
    }
}
```
Components (also `.razor` files) — these are reusable pieces of UI that can be embedded in pages or in other components. A component can also keep its markup and logic separate by inheriting from a base class. For example, here's the `PageHeader` component from my site:

```razor
@namespace Tayco.Web.Components
@inherits PageHeaderBase

<MatH2>@Title</MatH2>
<MatDivider />
<br />
```

And its corresponding base class, which holds the C# logic and exposes a `Title` parameter:

```csharp
using Microsoft.AspNetCore.Components;

namespace Tayco.Web.Components
{
    public class PageHeaderBase : ComponentBase
    {
        [Parameter]
        public string Title { get; set; }
    }
}
```
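To use the component, a page just renders it like an HTML element and sets `Title` as an attribute. Here's a minimal sketch (a hypothetical About page, not taken from the repository):

```razor
@page "/about"
@using Tayco.Web.Components

<PageHeader Title="About Me" />

<p>Some details about me...</p>
```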
Given that, you can now clean up the default pages generated by the template (`Pages\Counter.razor` and `Pages\FetchData.razor`) and add your own. I currently have only two pages: an About Me page and a Blogs page, which I'll be updating later on to contain the blogs I've published.

If you happen to follow along and create a similar site, the rest of this post covers the other points I came across while developing it: hosting the app as a static website on AWS and automating the deployment.
One of the benefits of developing software using cloud services is the ease of use. The major providers (namely Microsoft, Amazon, and Google) all offer a wide array of services, making it possible to build robust and complex solutions with little effort.
Since our site is going to be static, it should be cheap and simple to maintain. We don't need any sort of server-side compute; we just need a place to host our files and provide public access to them. AWS has great documentation on hosting a static website, so I mainly followed that to set everything up.
Here's an overview of the services that will be needed:

- Amazon S3 - hosts the website's files and serves them publicly
- Amazon Route 53 - handles the domain registration and DNS routing
This diagram from the AWS documentation gives a great outline of how these services interact:
Aside from the low maintenance and ease of use, another benefit of this approach is the low cost. I ended up purchasing a domain with Route 53, which has been the most expensive part of the website at around $12.
The billing model of these services is based on your usage, so there are no large upfront fees or subscription costs. In most cases, a static site will cost anywhere from pennies to a few dollars per month.
As mentioned earlier, the AWS documentation for hosting a static site is pretty thorough, so I'd recommend following along with it. The whole process is straightforward and took me less than 30 minutes.
The only part I'll give some guidance on is uploading the actual website content. Once you have the S3 bucket for your root domain created, you can then add the contents of your Blazor app.
To do so, you'll first need to create the publishable contents of your app using one of the following methods:

- Publish the project from within Visual Studio, or
- Publish from the .NET CLI: `dotnet publish -c Release -o publish Tayco.sln`

I'd recommend getting familiarized with the CLI if you're not already; we'll be using it in a later post when setting up a Continuous Deployment pipeline.

Once your app has been published, you simply need to upload the contents into the S3 root domain bucket. Whichever method you choose to publish with, you should ultimately end up with a directory containing your `index.html`. This is the directory that you'll want to copy into your S3 bucket. It will contain everything in your `wwwroot` folder, along with the necessary DLLs and Blazor files.

As mentioned previously, one important thing to consider here is that anything you copy to this S3 bucket will be publicly accessible. If that's a concern for you, this solution is probably not appropriate; you may want to consider one where private content can be kept on a private server.
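You can upload the directory through the S3 console, or, if you'd rather script it, sync it with the AWS CLI. Here's a minimal sketch, assuming a hypothetical bucket named example.com and a publish output directory of `publish/`:

```bash
# Sync the published Blazor output into the root-domain bucket,
# making the objects publicly readable and pruning files that no longer exist locally
aws s3 sync publish/wwwroot s3://example.com --acl public-read --delete
```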
At this point, we've successfully published a Blazor WebAssembly app as a static website in AWS. This process essentially boiled down to the following:

- Creating a Blazor WebAssembly app
- Setting up the AWS services (an S3 bucket for the content and Route 53 for the domain/DNS)
- Publishing the app and uploading the published contents into the root domain's S3 bucket
Now our website is live, which is great. However, we're probably going to be making changes to the website over time, as with almost any piece of software. We could just repeat the process we've followed above, manually building and copying the contents into production every time we make a change.

Since I'll likely be the sole contributor to my website and changes will be relatively infrequent, this would probably be manageable (even though I'd cringe every time). But as certain factors in software delivery increase, namely the frequency of changes and the number of contributors, this manual process quickly becomes unmanageable.
This problem is generally solved under the umbrella of Continuous Integration and Continuous Deployment (i.e. CI/CD), which entails automating the build/deployment steps that we would otherwise do manually. There is an ever-growing list of tools that can be used to help implement CI/CD.
We'll be using GitHub Actions to automate the process of building our Blazor WASM app and deploying it to AWS.
I decided to go with GitHub Actions for this project mainly because the source code is already hosted on GitHub, so the automation can live right alongside the code it builds and deploys.
As mentioned previously, we have a few manual steps that we can follow to get our website into production. To translate that into GitHub Actions, we'll need to create a workflow. The idea of a workflow is pretty standard across different CI/CD implementations. At its core, it's just a definition of your build/deployment process, and that definition is generally stored in a YAML file.
With GitHub Actions, you create a workflow simply by adding the definition as a `.yml` file in the `/.github/workflows` directory of your repository. Below is the current workflow definition for uploading my website (don't worry, we'll break down the pieces of this next):
```yaml
name: Upload Website

on:
  workflow_dispatch:
    inputs:
      input_name:
        required: false
        default: "Upload Website - Manual Trigger"
  push:
    branches:
      - master
    paths:
      - 'src/**'

jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@master
      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '3.1.300'
      - run: dotnet build -c Release
      - run: dotnet test -c Release --no-build
      - run: dotnet publish -c Release --no-build -o publish Tayco.sln
      - uses: actions/upload-artifact@v1
        with:
          name: dist
          path: publish/wwwroot

  deploy:
    needs: [build]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v1
        with:
          name: dist
      - uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          SOURCE_DIR: 'dist/'
```
As you can see, there are a number of grouped sections in this file. Let's break it down to understand what's going on.
```yaml
on:
  workflow_dispatch:
    inputs:
      input_name:
        required: false
        default: "Upload Website - Manual Trigger"
  push:
    branches:
      - master
    paths:
      - 'src/**'
```
The `on` keyword is used to define the events that the workflow will be triggered by. In our case, this workflow can be triggered by two different events:

- `workflow_dispatch` - This is a recent addition that allows the workflow to be manually queued through the UI. Prior to this, you would have to trigger a different event (e.g. push a commit) in order to run the workflow.
- `push` - As you might guess, this event is any push to the repository matching a set of conditions. Our conditions are any push to the `master` branch with changes in the `src/` directory.

The `jobs` keyword defines what the workflow will actually do in response to the events defined above. Each job generally specifies:

- The environment it runs on (via `runs-on`)
- The series of steps it performs (via `steps`)

It's also important to note that jobs will run in parallel by default. So if you have dependencies between jobs, you'll need to explicitly declare those using `needs`, as illustrated below.
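As a quick illustration, here's a minimal sketch with hypothetical job names showing how `needs` turns otherwise-parallel jobs into a sequence:

```yaml
name: Needs Example
on: push

jobs:
  job-a:
    runs-on: ubuntu-latest
    steps:
      - run: echo "job-a runs first"

  job-b:
    # Without this 'needs' declaration, job-b would start in parallel with job-a
    needs: [job-a]
    runs-on: ubuntu-latest
    steps:
      - run: echo "job-b runs only after job-a succeeds"
```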
Our workflow consists of two rather straightforward jobs: `build` and `deploy`.

```yaml
build:
  runs-on: windows-latest
  steps:
    - uses: actions/checkout@master
    - uses: actions/setup-dotnet@v1
      with:
        dotnet-version: '3.1.300'
    - run: dotnet build -c Release
    - run: dotnet test -c Release --no-build
    - run: dotnet publish -c Release --no-build -o publish Tayco.sln
    - uses: actions/upload-artifact@v1
      with:
        name: dist
        path: publish/wwwroot
```
So walking through the steps for this job, we have:

1. Check out the `master` branch. (Remember that this job runs after any push to `master` in the `src/` directory.)
2. Set up `dotnet`. This is a community Action that can be found in the marketplace (see here).
3. Build and test the solution, then publish the app into the `publish` directory.
4. Upload the published `publish/wwwroot` output as an artifact named `dist` for later jobs to use.

```yaml
deploy:
  needs: [build]
  runs-on: ubuntu-latest
  steps:
    - uses: actions/download-artifact@v1
      with:
        name: dist
    - uses: jakejarvis/s3-sync-action@master
      with:
        args: --acl public-read --follow-symlinks --delete
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        SOURCE_DIR: 'dist/'
```
Before jumping into the steps, you might notice that we've declared `needs: [build]`. As you can imagine, this ensures that this job will run in sequence after the `build` job finishes successfully.

And for the steps, we have:

1. Download the `dist` artifact (the published contents) that we uploaded in the `build` job.
2. Sync the artifact's contents into our S3 bucket using the `jakejarvis/s3-sync-action` community Action.

There's another important aspect to look at here: our workflow is now interacting with external infrastructure (AWS S3 in this case). Thankfully, we can't make changes to our S3 bucket without telling it who we are, so we need to supply some identifying information. However, that information is confidential and should not be shared. And since workflow files are visible to anyone with access to the repository, we don't want to spill the beans in our code. GitHub has a solution to this problem, and that is with secrets.
Secrets act as a secure key-value store that can be set once and then used throughout your Actions. As you can see in the `deploy` job, we use the `${{ secrets.<SECRET_NAME> }}` syntax to access secrets that we've defined in the repository.
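One way to define these, if you happen to use the GitHub CLI, is sketched below (the secret names match the ones referenced by the workflow; each command prompts for the value):

```bash
# Store the credentials used by the deploy job as repository secrets
gh secret set AWS_S3_BUCKET
gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY
```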
And with that, we've now automated the build and deployment steps that we previously had to do manually. Now if we want to make a change, we just do the development locally and push the changes up to GitHub. From there, our Upload Website workflow will be triggered and kick off the necessary jobs to build and deploy our app.
I've also set up a couple of other workflows in the repository:

- One that runs on pull requests targeting the `master` branch. It just builds and runs the tests for the changes, ensuring that things are mostly working before merging into `master` (which will then trigger the Upload Website workflow). A sketch of this one follows below.
- One that runs on changes to the `/blogs/` directory on the `master` branch. It simply copies the contents into the S3 bucket that I use to serve the actual blog content to the application at runtime.
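Here's a minimal sketch of what that pull request workflow might look like; the name and trigger details are assumptions for illustration, not the exact file from the repository:

```yaml
name: Build & Test

on:
  pull_request:
    branches:
      - master

jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@master
      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '3.1.300'
      # Build the solution and run the tests against the proposed changes
      - run: dotnet build -c Release
      - run: dotnet test -c Release --no-build
```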
While my website isn't the most elegant (I certainly don't claim to be an expert in UI/UX), I found the whole process of working with Blazor to be very enjoyable, and I'm ultimately happy with the result.
Going forward, if I find myself repeating any tasks manually, I can look into adding them to these workflows. As mentioned earlier, I could probably maintain this repository without the automated workflows, but the value of these CI/CD practices grows as the frequency of changes and the number of contributors increase, making them essential for effective software delivery at scale.
Previously published at http://tay-co.com/