Photo by Timo Wagner on Unsplash

If you are one of the strong readers who made it through part 2, welcome back! If you are not, I urge you to read parts 1 and 2 with care, and come back when you are done.

_Dip Dapp Doe — Anatomy of an Ethereum distributed fair game (part 2): Today we are going to deploy the contracts and throw some colours on the screen as we design a static web site to interact…_ (hackernoon.com)

Today we are going to follow the Test Driven Development methodology on our frontend, along with Web3. We are also going to bundle the dapp and distribute it with IPFS. Keep your seat belts fastened, because we are approaching our destination!

Picture by publicdomainphotos

As a reminder, the source code of the article can be found on the GitHub repo: [ledfusion/dip-dapp-doe](https://github.com/ledfusion/dip-dapp-doe) — _Distributed app featured in a Medium article_

**Test Driven Development, yet again**

In the last article we dived into the architecture, design and building blocks of our dapp's frontend. For educational purposes, we even showed the integration of one of the blockchain transactions, but let's not lose perspective: in TDD, we need to spec first and code later.

There are very nice tools that allow automating UI tests and even recording them visually across different browsers. However, in dapp testing we are limited by two important caveats:

- **Only a few browsers support Web3.** Browser support may extend with the release of new MetaMask plugins, but for now we are mainly pivoting around the Chrome engine and Gecko.
- **We can't get programmatic access to control MetaMask/Web3.** Allowing Javascript code to accept transactions would be a huge security flaw, because any web site could steal our funds at once. However, that is exactly what we need to do in order to test our code.

The last issue would have been a major drawback for any serious Ethereum project's workflow. Until now.

**Dappeteer**

Puppeteer is an official package from Google that allows controlling a Chromium instance programmatically from NodeJS on Linux, Windows and MacOS. But how do we add the MetaMask plugin and tell it to accept transactions, if the plugin runs outside of our window?

That's where Dappeteer comes into play! It is another NPM package that features an embedded version of MetaMask, tells Puppeteer to run with the plugin enabled and provides some wrapper methods to import accounts, accept transactions and even switch to a different network.

In our `web` folder:

```sh
$ npm i -D puppeteer dappeteer
```

**Local blockchain**

If you recall, in part 1 we developed our smart contracts by deploying and testing them on a local blockchain. Test cases waiting for every public transaction to be mined would take ages to complete.

However, in part 2 we demonstrated the integration with the public blockchain from the browser. What happens now? How can we use a local blockchain, so that transactions are mined as fast as when using Truffle?

The tool for this is Ganache CLI. It is another NPM package, part of the Truffle Framework, and it is what we actually used under the hood in part 1.

```sh
$ npm i -D ganache-cli
```

If you run it now, you should see something like this:

Ganache CLI output

As you see, it generates random wallets with 100 ether each, but it can be fully customized. Now we can mine immediate transactions without polluting the public blockchain with junk.

**Workflow scripts**

In normal web projects, you may be used to working with Webpack started by a simple NPM script. However, in the current project we need to combine several components running at the same time. What needs to happen when we run our E2E tests?
1. Start the Ganache local blockchain (in the background)
2. Recompile the contracts
3. Deploy them to the local blockchain
4. Write the contract instance's address so that the frontend knows where to attach
5. Bundle the frontend files with Parcel
6. Start a local HTTP server for the static files (in the background, too)
7. Launch Chromium + Dappeteer and run the tests
8. Kill Ganache and the HTTP server
9. Forward the exit code of Mocha to the parent process, so that it can determine whether all tests passed

You are free to use any task runner that you like, but to me this clearly becomes a job for a shell script. To get the best of both worlds, I'd suggest using [runner-cli](https://www.npmjs.com/package/runner-cli), along with a Taskfile. More on this.

```sh
$ [sudo] npm i -g runner-cli
```

Let's create one:

```
$ run --new
? What template do you want to use?
  Gulp file
  NPM package
  Makefile
❯ Shell script
```

Now edit `taskfile` and add a function called `test` with the following set of commands (commented in-line):

```sh
function test {
  echo "Starting ganache"
  ganache-cli --mnemonic "$(cat ./dev/mnemonic.txt)" > /dev/null &
  ganache_pid=$!
  # ...
```

Here we start the server in the background (with the `&` at the end) and retrieve the process PID by assigning `$!` into `ganache_pid`. Also note that `"$(cat ./dev/mnemonic.txt)"` reads the contents of the `mnemonic.txt` file and passes them as a Ganache parameter. With that, everyone can import the same accounts.

```sh
  echo "Recompiling the contracts"
  cd ../blockchain
  ./taskfile build
  cd ../web
```

Here we go to the contracts folder and run another script that launches Solc to compile the contracts. Compilation can run concurrently with Ganache.

```sh
  echo "Deploying to ganache"
  node ./dev/local-deploy.js
```

This script is quite similar to `blockchain/deploy/lib.js`. Instead of deploying the contracts to the Ropsten network, it deploys them to Ganache. It also stores the instance address into `.env.test.local` (we will see it later).

```sh
  echo "Bundling the web with NODE_ENV=test"
  NODE_ENV=test parcel build -d ./build --log-level 2 --no-source-maps src/index.html &
  parcel_pid=$!
```

Now that we know what address to attach to, we can tell Parcel to bundle from `src` to `build` with the appropriate environment variables in place. This can run in parallel with our next step:

```sh
  echo "Starting local web server"
  serve build -p 1234 &
  serve_pid=$!
```

This will simply start an HTTP server, leave it in the background and take note of its PID. Run `npm install -D serve` to add it to the project.

```sh
  echo "Running the tests"
  wait $parcel_pid
  sleep 1
  mocha ./test/frontend.spec.js
  mocha_result=$?
```

Here, we `wait` for the Parcel process to complete, and when it does, we finally start our Mocha test cases. We keep the exit code of Mocha by reading `$?`, and a bit later we start to clean things up:

```sh
  echo "Stopping the servers"
  kill $ganache_pid
  kill $serve_pid
  exit $mocha_result
}
```

We kill the two background processes and finally exit with the status code returned by Mocha. Ta da!

**Environment data**

At the current point, if we run `parcel -d ./build src/index.html`, we will start a dev server on port 1234 with a Web3 pointing to the Ropsten (test) network. But if we do `run test`, then we expect a web site that will connect to Ganache. How do we achieve that without touching any code?

Parcel allows us to use `.env` files and map their `KEY=value` lines into `process.env.*` variables. Let's create a couple of files for our environments. In `web/.env`:

```
CONTRACT_ADDRESS=0xf42F14d2cE796fec7Cd8a2D575dDCe402F2f3F8F
WEBSOCKET_WEB3_PROVIDER=wss://ropsten.infura.io/ws
EXPECTED_NETWORK_ID=ropsten
```

These are the environment variables that will be used by default. That is, when compiling the web, we will connect to the public Ropsten network, expect MetaMask to be on this network too, and use the contract address where it is deployed.
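The way a bundler resolves these variables can be pictured with a few lines of plain NodeJS: defaults are parsed from a `.env`-style text, and values from a more specific, NODE_ENV-dependent override file win over them. This is a simplified sketch of the merge logic, not Parcel's actual dotenv implementation, and the override contents below are illustrative:

```javascript
// Minimal dotenv-style parser: "KEY=value" lines into an object.
function parseEnv(text) {
  const vars = {}
  for (const line of text.split('\n')) {
    const match = line.match(/^(\w+)\s*=\s*(.*)$/)
    if (match) vars[match[1]] = match[2]
  }
  return vars
}

// Default values, as in web/.env
const defaults = parseEnv(`
CONTRACT_ADDRESS=0xf42F14d2cE796fec7Cd8a2D575dDCe402F2f3F8F
WEBSOCKET_WEB3_PROVIDER=wss://ropsten.infura.io/ws
EXPECTED_NETWORK_ID=ropsten
`)

// Values from a hypothetical NODE_ENV-specific override file
const overrides = parseEnv(`
WEBSOCKET_WEB3_PROVIDER=ws://localhost:8545/ws
EXPECTED_NETWORK_ID=private
`)

// The more specific file wins; keys it omits keep their default value
const env = { ...defaults, ...overrides }

console.log(env.EXPECTED_NETWORK_ID) // → private
console.log(env.CONTRACT_ADDRESS)    // → 0xf42F14d2cE796fec7Cd8a2D575dDCe402F2f3F8F
```

Note that the spread order is what implements the precedence: keys present in `overrides` shadow the ones in `defaults`.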
However, when we are testing, we want those variables to look like below in `web/.env.test.local`:

```
CONTRACT_ADDRESS="--- LOCAL CONTRACT ADDRESS GOES HERE ---"
WEBSOCKET_WEB3_PROVIDER=ws://localhost:8545/ws
EXPECTED_NETWORK_ID=private
```

When `NODE_ENV` is set, Parcel will look for `.env.$(NODE_ENV).local` and inject those values instead of the default ones. So `process.env.EXPECTED_NETWORK_ID` will evaluate to `private` in testing and be `ropsten` otherwise. More info here.

As we already mentioned, we need to replace the `CONTRACT_ADDRESS` placeholder with the contract's local address. The main difference between [web/dev/local-deploy.js](https://github.com/ledfusion/dip-dapp-doe/blob/master/web/dev/local-deploy.js) and the deployment script we already wrote in `blockchain/deploy/lib.js` is the following function:

```js
function setContractAddressToEnv(contractAddress) {
    if (!contractAddress) {
        throw new Error("Invalid contract address")
    }
    const filePath = path.resolve(__dirname, "..", ".env.test.local")

    let data = fs.readFileSync(filePath).toString()
    const line = /CONTRACT_ADDRESS=[^\n]+/
    data = data.replace(line, `CONTRACT_ADDRESS=${contractAddress}`)
    fs.writeFileSync(filePath, data)
}
```

Every time we `run test`, the `.env.test.local` file is updated, and there is no code to modify.

What if I want to just develop on a version of the dapp using the local blockchain? Two versions of the `dev` task are available in the `web` folder's taskfile on GitHub:

- `run dev` will provide an environment identical to the one used to run the tests, but leaving the browser open for you
- `run dev ropsten` will simply run Parcel's dev server and rely on Chrome/Firefox's MetaMask, as any user would do

**Time for specs**

Create the file `web/test/frontend.spec.js` and copy the following content into it:

Ready? Type `run test` and see the magic happen :)

Everything we need is ready for us. To keep the article readable, we will not elaborate on every use case. Feel free to check the spec file on GitHub.

What happens next?
We could approach the specs by starting a game, switching to another account, accepting the game, switching the account back again, etc. However, this could lead to overcomplex specs that check a behaviour users will never experience like that. We'd rather focus on one player's experience and make sure that all relevant use cases are checked.

To simulate the actions of the opponent, we will launch the corresponding transactions from the NodeJS testing script. So the approach we will follow looks like:

- We tell Chromium to create a game
- We launch a transaction from `web/test/frontend.spec.js` to accept the game from `accounts[1]`
- Chromium confirms
- We tell Chromium to mark one position
- We make a transaction from the opponent's account to mark another position
- We repeat the process until we reach a draw
- We check that the cells have the appropriate state and that the game ends in a draw

So how would such a use-case test look like?

Writing UI specs like this can be slow at the beginning, but the effort pays off as soon as you have simulated 5 complete games in less than a minute. A few things to note:

- Some assertions need to be delayed a bit, so that the frontend receives events and UI components respond
- The amount of time to delay may vary, depending on the environment's speed
- We have added HDWalletProvider to reuse the same mnemonic, get the second account available and let the opponent play from it
- We have created a couple of helper functions to encapsulate repetitive tests, and will probably add more as we test more use cases

Given the following spec, we code the behaviour of the frontend accordingly. Let's watch the movie of our test case playing against itself:

Doesn't it remind you of a well-known film?

**Coding and polishing**

After the first use case is tested, the slope doesn't look steep anymore :) What's left for us is to spec the remaining use cases, code the frontend accordingly and bundle the static web site.
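The final assertion of the scripted game above, that a full board with no winner is a draw, can be made explicit with a small board-state checker. This is an illustrative sketch, not code from the repo; the helpers `winnerOf` and `isDraw` are hypothetical names:

```javascript
// Cells of the 3x3 board as a flat array of 9:
// 0 = empty, 1 = player X, 2 = player O.
const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6]             // diagonals
]

// Returns 1 or 2 if that player has three in a row, 0 otherwise
function winnerOf(cells) {
  for (const [a, b, c] of LINES) {
    if (cells[a] !== 0 && cells[a] === cells[b] && cells[b] === cells[c]) {
      return cells[a]
    }
  }
  return 0
}

// A draw is a full board with no winner
function isDraw(cells) {
  return cells.every(cell => cell !== 0) && winnerOf(cells) === 0
}

// An example full board with no three-in-a-row:
//   X O X
//   X O O
//   O X X
const finalBoard = [1, 2, 1, 1, 2, 2, 2, 1, 1]
console.log(isDraw(finalBoard)) // → true
```

In the real spec, the equivalent check is done against the cell states read from the UI after the last simulated move.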
Using the building blocks explained in part 2, the rest of the frontend functionality can be developed without major issues.

When our specs are ready and development is underway, we see that it would be good to show the "Withdraw" button only when the withdrawal hasn't been done already. However, this means that we need to add a get operation to the smart contract. What does that mean for us at this point?

- Add a test case in `blockchain/test/dipDappDoe.js`
- Add the function to `blockchain/contracts/DipDappDoe.sol`
- Exec `run test` on the `blockchain` folder
- Add an assertion in `web/test/frontend.spec.js`
- Update `web/src/game.js`
- Exec `run test`

Updates on the contract will immediately reflect on the frontend's code, and automated testing will ensure in about one minute that we broke nothing.

**Distribution**

**Bundling**

Once we are happy with the specs, the results and the UI performance, it's time to think about distributing our dapp to the world. The first step is to use Parcel to bundle it with production settings:

```sh
function build {
  echo "Recompiling the contracts"
  cd ../blockchain
  ./taskfile build > /dev/null
  cd ../web

  echo "Cleaning the build folder"
  rm ./build/*

  echo "Bundling the web site"
  NODE_ENV=production parcel build -d ./build --log-level 2 --no-source-maps src/index.html
}
```

Next, it is time to quickly check that everything actually looks as expected, including the attachment to the Ropsten network:

```sh
function www {
  build
  serve build -p 1234
}
```

Navigate to http://localhost:1234/ and see that everything is okay. These are the static files of our dapp:

At this point, we could simply upload these files to Netlify, Surge, S3 or whatever provider you like. Once our domain name points to the hosting IP address and the TLS certificate is ready, you should not worry about data integrity anymore, right? If nobody updates your git repo, your provider sticks to the SLA and corrupt governments don't censor your web site, everything is fine.

However, it is a bit inconsistent that our dapp uses a smart contract that runs on a decentralized blockchain, while it remains accessible through a centralized web site that a big fish could take down.

**IPFS**

This is one of the main reasons why IPFS exists. IPFS stands for InterPlanetary FileSystem, and it is conceived with the aim of making content freely and reliably accessible across the globe. It has many advantages and some drawbacks, but for educational purposes, we will go through one of the most popular decentralized filesystems.

In a similar way to a blockchain, the IPFS network is made of a lot of nodes around the world that nobody controls. They act as a global peer-to-peer network, in which files are addressed by their hash. You can think of it like a Git + BitTorrent architecture that also provides an HTTP gateway.

Without further introduction, let's jump into it. First, install the IPFS CLI:

```sh
$ curl -O https://dist.ipfs.io/go-ipfs/v0.4.17/go-ipfs_v0.4.17_darwin-amd64.tar.gz
$ tar xvfz go-ipfs_v0.4.17_darwin-amd64.tar.gz
$ cd go-ipfs
$ ./install.sh
```

Let's init our local repository:

```
$ ipfs init
initializing IPFS node at /Users/jordi/.ipfs
generating 2048-bit RSA keypair...done
peer identity: QmUMkM9Px3touHUaWjB5yKi1qRbVwA9zRk8gjAndzkAy9w
to get started, enter:

  ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme
```

What happens if we run the last line as a command?

```
Hello and Welcome to IPFS!

██╗██████╗ ███████╗███████╗
██║██╔══██╗██╔════╝██╔════╝
██║██████╔╝█████╗  ███████╗
██║██╔═══╝ ██╔══╝  ╚════██║
██║██║     ██║     ███████║
╚═╝╚═╝     ╚═╝     ╚══════╝

If you're seeing this, you have successfully installed
IPFS and are now interfacing with the ipfs merkledag!

 -------------------------------------------------------
| Warning:                                              |
|   This is alpha software. Use at your own discretion! |
|   Much is missing or lacking polish. There are bugs.  |
|   Not yet secure. Read the security notes for more.   |
 -------------------------------------------------------
```
```
Check out some of the other files in this directory:

  ./about
  ./help
  ./quick-start     <-- usage examples
  ./readme          <-- this file
  ./security-notes
```

Several things happened:

- `QmS4ustL54...` is the hash of an IPFS folder, and it depends on the contents of all of its files
- If any of its files or subfolders varies, its hash will vary too
- `QmS4ustL54.../readme` resolves the IPFS folder and retrieves the hash of the `readme` file
- With the hash of that file, the contents are transferred across the net (in this case, locally) and printed to the screen

At the moment, we are only using IPFS as a local client. Any content that is already in our repository will resolve immediately. But if we don't have it, its hash will be requested from the network, transferred and eventually cached in our local repository. If nobody uses it for a while, it may be garbage collected.

How do we add our files and become content providers? Let's run the following command and see what happens:

IPFS has hashed our files, computed the hash of the root folder and added them to our local repository. If we `cat` its content locally, this is what we get:

However, what happens if we add a simple space + newline to `index.html`?

You guessed it: the hash of `index.html` is radically different, and so is the hash of the root folder. Any attempt to alter data integrity will always generate new hashes.

But how do we access these files from a web browser? IPFS provides an HTTP and HTTPS gateway. Any file or folder can be navigated to with a URL like `https://ipfs.io/ipfs/<hash>`. However, if we try to access this URL with the hash of our root folder, the browser will keep waiting forever, because no reachable node has such content yet.

Yes, we are not a node yet. To join the network and provide our content, we need to start IPFS as a daemon. Open a new terminal tab and leave it running:

```
$ ipfs daemon
```

If we now visit `https://ipfs.io/ipfs/<hash>`, it may take a few seconds but it will load. But not quite:

Everything was fine when running on `localhost`, but it turns out that ParcelJS expects the bundles to be available from the root folder of the server, and now we are under `/ipfs/<hash>`. A little change in `web/taskfile > build` should make the difference:

```sh
# ...
NODE_ENV=production parcel build [...] src/index.html --public-url ./
```

And then again, rebuild and add to IPFS. Let's copy the new hash and see what happens now:

DipDappDoe served from the IPFS gateway

After a bit of patience, our first request will finally complete and our dapp will be running! Subsequent requests will be much faster.

What happens now? If we run `ipfs pin ls`, we will get the following:

IPFS allows nodes to pin files, so their content is never garbage collected from them. In our case, we have the two versions of our `build` folder and the sample data created on `ipfs init`.

Note that the `indirect` entries are files contained in other IPFS folders: they are pinned only because another pinned element contains them. The `recursive` entries correspond to the explicitly pinned folders.

Now, our daemon is running and our content is accessible, but what happens if we stop it? Any content that has not been accessed yet will become unavailable. The files of the dapp we just visited will remain reachable for a few hours, until the network nodes mark them as unused and clean them up. Unused content will only continue to be stored and available as long as an active node keeps it pinned.

**IPNS**

Telling the world to connect to a different URL every time the web site is updated is not much convenient. Isn't there anything better?

IPFS provides the Inter-Planetary Name System (IPNS) mechanism. An IPNS hash acts as an alias to an IPFS hash, with the difference that it can be updated over time. An IPNS hash can only be updated from the account that created it, as it is signed with the user's private key.

```
$ ipfs name publish QmbVfUBSHp42kYtDud9zr1pxedd4dgqDmAHuRqHRPKGywT
Published to QmUMkM9Px3touHUaWjB5yKi1qRbVwA9zRk8gjAndzkAy9w: /ipfs/QmbVfUBSHp42kYtDud9zr1pxedd4dgqDmAHuRqHRPKGywT
```

From now on, the IPNS hash `QmUMkM9Px3...` will resolve to the content at `/ipfs/QmbVfUBSHp42...`. In the browser, navigating to `https://ipfs.io/ipns/QmUMkM9Px3...` will be the same as navigating to `https://ipfs.io/ipfs/QmbVfUBSHp42...`

If at a later time we need to update the frontend, we will repeat the steps above, publishing the new IPFS hash to the same IPNS name. Existing users will continue to use the same URL.

**DNSLink**

However, the IPNS approach still presents a few issues:

- IPNS URLs are neither user friendly nor easy to remember
- Given an IPNS URL, users will not be able to verify that such URL is legitimate and belongs to us
- Using the ipfs.io domain, if the user was brought to a malicious web site also hosted on ipfs.io, certain dapps could be exposed to XSS attacks or have local data retrieved from unrestricted cookies
- IPNS hash resolution may be slow

A more desirable scenario would be to use our domain name instead of the IPNS hash. To that end, IPFS allows using DNS `TXT` records to indicate what IPFS resource should be served. If our domain was `dapp.game`, we would add a `TXT` record containing a string like:

```
dnslink=/ipfs/<hash>
```

When the changes propagate through the net, the IPFS gateway will be able to fetch the `TXT` record of the given domain and use the underlying hash. Our dapp should then be available via [https://ipfs.io/ipns/dapp.game/](https://ipfs.io/ipns/dapp.game/). Easy to recognize, easy to check.

But as we still use the ipfs.io domain, the third issue above still remains.

**Custom domain**

To achieve the most user-friendly approach, we would need the dapp to be accessible via `dapp.game`, but then we face a tradeoff: TLS or IPFS.

For content to travel through TLS with our domain, we need to use our own server with the appropriate TLS certificate. The IPFS gateway has its own domain and certificate, and any other host name would be rejected by the browser. If the above is not an option, then requests to `dapp.game` can be `CNAME`'d to `gateway.ipfs.io`, but this will only work over plain HTTP.

**Own server**
We could get a TLS certificate from LetsEncrypt, start a local IPFS node and use Nginx to proxy external requests to IPFS, but that defeats the advantages of using IPFS: workload, security and data integrity would depend on our centralized server, which becomes the bottleneck. Netlify, Firebase or Amazon are much stronger candidates than your own server to host the static site. It is true that the IPFS gateway could be considered a central point as well, but it is backed by a decentralized network of nodes and has successfully overcome DDoS attacks and censorship attempts. Hosting the static files on our own domain would mitigate potential XSS vulnerabilities, but it would expose our server to threats that IPFS has already handled in the past. More info.

**IPFS HTTP gateway**
On the other hand, DipDappDoe does not rely on external resources beyond the blockchain, so XSS should not be an issue for it. But communication through plain HTTP opens the door to DNS hijacking and man-in-the-middle attacks.

**IPFS conclusion**

The final decision will depend very much on the way the dapp is built and what kind of users will interact with it.

Using an IPNS URL like `https://ipfs.io/ipns/dapp.game/` may be suitable if your dapp cannot leak any information to an XSS attacker, does not load any dynamic content, and your users don't mind copying or typing slightly longer URLs.

`CNAME`-ing our domain to the IPFS gateway should be avoided, as it will only run over HTTP.

Using your own backend to serve on HTTPS doesn't take any advantage of IPFS, compared to serving local static files on its own.
This approach would be suitable if third-party content must be accessed from the dapp, if we can withstand big-fish attacks, or if the user base would not play well with a URL like the one above.

Using one of the major hosting providers is the fallback for any of the above approaches. They will let you use your own domain name and TLS certificates, and will do their best to prevent potential DDoS and censorship attacks. But they will be centralized.

**Global summary**

As you have seen, writing a simple dapp is far from simple. The list of technologies involved is not short:

- Solidity, Solc
- Ethereum blockchain, Infura
- Truffle, Ganache
- Web3, Metamask
- NodeJS, Javascript
- Mocha, Chai
- React, Redux
- CSS, Antd
- ParcelJS, Dotenv
- Puppeteer, Dappeteer
- Shell scripts
- IPFS, DNS records
- Nginx, LetsEncrypt (if you run your own server)
- Photoshop or Sketch (if you design the frontend)

DipDappDoe is an effort to cover the entire process of building a fully functional dapp with the minimum viable technology. In part 1 we learnt how to use the TDD methodology to develop the smart contracts of the dapp. In part 2 we saw how to deploy the contracts to a decentralized blockchain, and we bootstrapped the architecture of the dapp's frontend. In part 3 we have followed the TDD methodology again to develop the frontend, and we have used a decentralized filesystem like IPFS to publish it to the world.

Now that our MVP is ready, what might come next?

**Room for improvement**

Obviously, a blockchain version of Tic-Tac-Toe will not be as exciting as a centralized real-time version. The core value of our version is to provide a provably fair game, powered by smart contracts that everybody can trust. Our main goal has been to demonstrate the full stack of a distributed app and see how to use the building blocks at our disposal.

If DipDappDoe was a real project, there would be many, many details to improve and work on at this point:
- Have the smart contracts audited by additional expert blockchain developers, beyond the developer himself
- Have the contracts' metadata and source code automatically published, so that they can be viewed and validated on sites like Etherscan
- Make active use of Swarm and Whisper, once these two Ethereum technologies become ready and steady
- Hire a dedicated graphics designer
- Implement a much deeper check of the client environment, detecting Web3 compatibility and leading the user to get a fully operational browser
- Validate the UX/UI with private beta testers and ship an MVP close to what the app looks like today (if no issues arise)
- Iterate over the UI/UX improvements as the user base grows and the team gets feedback about the dapp
- Disconnect from Web3 so that the frontend testing process can exit gracefully (when newer versions allow it)
- Explore whether Dappeteer UI test cases could work in headless mode, so that CI/CD providers could run them

**The end 🙂**

Writing this series of articles has involved a big effort and countless hours of work. I'm honored to see that you made it to the end.

If you found the series of articles interesting and useful, please stand up, clap your hands 👏, smile 😊 like a hero and share it with the same 💚 that I've put into this piece of work. Thank you!

As said earlier, the project featured in the article can be found in the GitHub repo: [ledfusion/dip-dapp-doe](https://github.com/ledfusion/dip-dapp-doe) — _Distributed app featured in a Medium article_

Stay tuned, because this is just the beginning of the above tech.

Photo by Timo Wagner on Unsplash