
7 Proven Practices to Boost Development Speed and Project Quality

by Daniil Sitdikov, March 28th, 2023

Too Long; Didn't Read

By adopting techniques such as backend response mocking, feature flags, error monitoring, performance optimization, code style standards, regression tests, and tracing, you can create more efficient and reliable software.

All of these points can be applied to mobile development, web frontend, and backend. I have gathered these practices from different teams and through the issues I faced over the last six years. They can be especially helpful when you build a project from scratch. Some of them may suit you perfectly, while others may not. If you have your own approaches and different experiences, I would be happy if you shared them here. By the way, if you are a mid-level or junior developer seeking a promotion, implementing these practices in your team can really help. Let’s go!

1. Mocking backend responses until the backend is ready

In a standard software development process, when a new feature request comes from the business, it is distributed among several teams: front-end, back-end, and mobile app development.


Then, each team proceeds with planning and task decomposition. But what if the back-end team requires significantly more time to develop their part? What if they can deliver endpoints only once a week?


The backend becomes a bottleneck.


The mobile and front-end development teams end up working like this: "Oh, the back-end has already implemented this. Let me take this task." Then, they take a break, switch their context to another feature, and the cycle continues. This leads to fatigue, decreased speed, and reduced quality.


Solution: agree on a contract with the back-end team and mock all requests.

In the classic approach, we have gaps between tasks. In the "mock approach," all work proceeds as a continuous flow.


1. Coordinate with the back-end team on the endpoints and entities.


2A. Implement the backend API with stub responses. The Faker library can help with sample data generation; a sketch follows below.
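For instance, a stubbed endpoint that returns Faker-generated data might look like the following minimal sketch. It assumes Express and @faker-js/faker; the /api/users/:id route and the user shape are hypothetical placeholders for whatever contract you agreed on.

import express from 'express';
import { faker } from '@faker-js/faker';

const app = express();

// A stub endpoint that honors the agreed contract but returns
// randomly generated sample data instead of touching a database.
app.get('/api/users/:id', (req, res) => {
  res.json({
    id: req.params.id,
    name: faker.person.fullName(),
    email: faker.internet.email(),
    createdAt: faker.date.past().toISOString(),
  });
});

app.listen(3000);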


2B. Or implement stubs on the frontend. This can be an object with data directly in the code. For example, in JavaScript, you can implement this efficiently using dynamic imports, which keep the mock data out of the main bundle:

getUser() {
  // The dynamic import loads the mock module lazily, so the sample
  // data never ends up in the main bundle. `userById` is a named
  // export of the mock module.
  return import('../../assets/mocks/users')
    .then(module => module.userById)
    .then(deserializeUser);
}

This can also be a mock HTTP service that fetches JSON files from assets instead of making real requests.


  3. Hide the feature behind a feature flag.


  4. When the backend is ready, switch to the actual API if you used the front-end stub approach, verify that everything works as expected, and turn the feature on.

2. Feature flags

Now, as you probably noticed, in the previous section I mentioned feature flags. In a nutshell, feature flags, a.k.a. feature toggles, allow developers to turn features on or off in a live environment. They are also useful in a number of other cases: rolling out new features gradually, performing A/B testing, enabling beta features, and shipping hotfixes.


We use GitLab for storing feature flags. It’s a dedicated repository that is consumed by both backend and frontend projects. The great news is that it has a user-friendly UI, so product managers can manage features by themselves. Previously, we used feature flags in each project repository separately. However, that approach didn’t provide the ability to disable a feature for the whole product at once, so we moved everything into a single repository.


In the code, it looks quite simple:

  1. In the project, we fetch all active feature flags. Since, under the hood, GitLab's feature flags are based on Unleash (a feature toggle service), we use its official client.
  2. Then, we simply wrap the code that needs to be hidden in an if (features.YOUR_FEATURE) check.
  3. You can expand the use cases by adding different values to a feature flag, for instance, a color value or a discount value. A setup sketch follows below.
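Here is a minimal sketch of wiring the official Unleash client to GitLab, assuming the unleash-client Node package; the URL, instance ID, and flag name are hypothetical placeholders:

const { initialize } = require('unleash-client');

// GitLab exposes an Unleash-compatible API per project.
const unleash = initialize({
  url: 'https://gitlab.com/api/v4/feature_flags/unleash/<project_id>',
  appName: 'production',
  instanceId: '<instance_id>',
});

// Wait until the flags have been fetched, then branch on them.
unleash.on('synchronized', () => {
  if (unleash.isEnabled('YOUR_FEATURE')) {
    // render or enable the hidden feature here
  }
});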


3. Monitoring errors for tracking issues in a production environment

When our product transitioned from the MVP stage to a production application, we were concerned that users would encounter errors that we couldn't reproduce and might not even be aware of. After researching error-tracking tools, we settled on Sentry. The experience was positive. Now, let’s go through some important nuances.

Useless Errors

Under the hood, any uncaught exception will be tracked. As the application and the number of users grow, the number of errors can become so overwhelming that it is nearly impossible to notice anything truly important. Sentry can turn into a dumpster if you don't filter out the unnecessary stuff. For example, events like canceled requests, connection errors, and errors from third-party scripts are utterly useless and will only spam your work email with notifications. As a solution, you can add filters to the configuration. To do this, simply define a beforeSend callback and pass it to Sentry.init. In this callback, you can analyze each caught error and discard it (by returning null) if it's useless. Here's an example of a filter that excludes unnecessary errors:

import axios, { AxiosError } from 'axios';

function beforeSend(event, hint) {
  const error = hint.originalException;

  // Errors thrown by third-party scripts we don't control
  const externalScripts = [
    'gtm.js',   // Google Tag Manager
    'watch.js', // X Analytics
  ].join('|');

  // Network-level failures that carry no actionable information
  const errorsToIgnore = [
    AxiosError.ERR_NETWORK,
    AxiosError.ECONNABORTED,
    AxiosError.ETIMEDOUT,
  ];

  if (axios.isCancel(error)
      || errorsToIgnore.includes(error?.code)
      || error?.stack?.match(externalScripts)) {
    return null; // discard the event: Sentry will not report it
  }

  return event;
}


Include more data for better debugging

By default, Sentry might not include the content of the request and response in the error report. Without this information, proper debugging is nearly impossible. Fortunately, we can attach it in the same beforeSend handler:

function beforeSend(event, hint) {
  const error = hint.originalException;

  if (error?.isAxiosError) {
    // Attach the request URL and both payloads to the event so the
    // error report contains everything needed to reproduce the call.
    const url = error.request?.responseURL;
    const response = error.response?.data;
    const request = error.config?.data;

    event.extra = {
      ...(event.extra || {}),
      url,
      response,
      request,
    };
  }

  return event;
}

Filter Sensitive Information

Data such as passwords, email addresses, and keys should not be included in the error content. Sentry has a built-in mechanism for hiding this type of information, which you can configure in the security settings. Moreover, you can also scrub fields from the event object in beforeSend, as sketched below.
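For instance, a minimal scrubbing sketch; the header names here are just illustrative, so adjust them to whatever your app actually sends:

function beforeSend(event, hint) {
  // Strip headers that may carry credentials before the event
  // leaves the application; these header names are illustrative.
  if (event.request?.headers) {
    delete event.request.headers['Authorization'];
    delete event.request.headers['Cookie'];
  }
  return event;
}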

Standalone Solution

If the nature of your business prohibits storing this kind of data on someone else's servers, Sentry can also be self-hosted on your own infrastructure.

4. Tracing

The path of a trace ID

Imagine a situation where you successfully capture an error in Sentry, but the information in the description is insufficient. You turn to logs, but how can you identify the specific error among thousands of requests and even more log lines per second? How can you distinguish the correct ones, construct the request chain, and pinpoint the exact error, especially when your business has multiple teams and integrates with other services? This is where tracing comes into play.


  1. Tracing provides a complete diagram of invocations and identifies the precise method that failed, even when you have asynchronous communication performed by a message broker.
  2. It allows you to easily determine which side the error occurred on when integrating with different teams.
  3. Tracing is also useful for performance debugging. For instance, it can help clarify whether rendering takes longer or if a method in a microservice is not optimized enough.


In our specific implementation, we used Jaeger, which is based on the OpenTracing API.


In a nutshell, each request and all of its method calls are tagged with a unique identifier. Each identifier has a reference to its parent and some metadata. The exact structure depends on the implementation, but for OpenTracing, you can read how it works and get familiar with terms like span, reference, child, and parent on the official repository page. In real life, tracing will luckily be needed only rarely. But in those rare incidents, it can save you a lot of time. A minimal sketch follows below.
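To give a feel for the API, here is a minimal sketch using the jaeger-client Node package; the service name, operation names, and tag are hypothetical:

const { initTracer } = require('jaeger-client');

const tracer = initTracer(
  {
    serviceName: 'payments-service',      // hypothetical service name
    sampler: { type: 'const', param: 1 }, // sample every request
    reporter: { logSpans: true },
  },
  {},
);

// A parent span covering the incoming request...
const span = tracer.startSpan('handle-payment');
span.setTag('user.id', 42);

// ...and a child span for a nested call. Jaeger links the two
// through the trace ID, so the whole chain shows up as one tree.
const childSpan = tracer.startSpan('charge-card', { childOf: span });
childSpan.finish();
span.finish();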

5. Performance optimization

When we implemented the MVP of a fintech app, we had quite a complicated form. At that time, I was still young and inexperienced, and eventually, we realized that our project was slowing down. We had to spend additional hours figuring out the reason: we had many unnecessary re-renders because we ignored basic rules related to props in React. I wanted to do everything possible to avoid such situations in the future.


So, I added linters to the project and an additional start configuration to package.json to run why-did-you-render (a setup sketch follows below). In short, this plugin issues a warning if something is re-rendered unnecessarily and suggests how to avoid it. We also included running Lighthouse in headless mode. Some people say that premature optimization is bad, but for me, it's a principle: do it right from the start.
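A minimal setup sketch for why-did-you-render, assuming the @welldone-software/why-did-you-render package; the file must be the very first import of the app's entry point:

// wdyr.js: import this file before anything else in your entry point
import React from 'react';

if (process.env.NODE_ENV === 'development') {
  const whyDidYouRender = require('@welldone-software/why-did-you-render');
  whyDidYouRender(React, {
    // warn whenever a pure component re-renders with equal props
    trackAllPureComponents: true,
  });
}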

6. Defined code style for all team projects

You've likely heard of the broken windows theory. If there's one broken window in a building and no one replaces it, eventually there won't be a single intact window left in that building.

The fewer rules and controls there are in a project, the greater the temptation to write low-quality code or to write it in an entirely different style. Inconsistent code increases the time it takes to understand it, while clear, familiar, and concise code allows for quick reading. In one of our teams, we described the coding style in a single place. As a great starting point, you can take Prettier's defaults or the Airbnb style guide; a sketch of such a shared config follows below.
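For example, a minimal shared ESLint config might look like this. It assumes the eslint-config-airbnb and eslint-config-prettier packages, which are common choices rather than something prescribed by this article:

// .eslintrc.js: one shared config consumed by every team project
module.exports = {
  extends: [
    'airbnb',   // the Airbnb style guide as the baseline
    'prettier', // let Prettier own the formatting rules
  ],
  rules: {
    // project-wide overrides live here, in one place
  },
};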

7. Regression tests

A significant amount of literature has already been written about the different types of tests, the approaches, and how to write them properly. The only thing worth mentioning here is that no production application can survive without regression testing. That's why we focused all our efforts on creating a comprehensive end-to-end testing framework and, based on it, wrote tests that are linked to BDD scenarios and user stories. We used the Page Object pattern to organize our code and the Playwright framework for interacting with the browser; a sketch follows below. To test across different browsers, including Safari, you can use a solution called Moon, which can be deployed on one of your servers.
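Here is a minimal sketch of the Page Object pattern with Playwright; the LoginPage class, selectors, and URLs are hypothetical:

const { test, expect } = require('@playwright/test');

// The page object hides selectors and low-level actions behind
// an interface that reads like the user story.
class LoginPage {
  constructor(page) {
    this.page = page;
    this.email = page.locator('#email');
    this.password = page.locator('#password');
    this.submit = page.locator('button[type="submit"]');
  }

  async login(email, password) {
    await this.email.fill(email);
    await this.password.fill(password);
    await this.submit.click();
  }
}

test('user can sign in', async ({ page }) => {
  await page.goto('https://example.com/login');
  await new LoginPage(page).login('user@example.com', 'secret');
  await expect(page.locator('.dashboard')).toBeVisible();
});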

Conclusion

Thank you for taking the time to read this article! It highlights key software engineering practices that enhance development processes and code quality. By adopting techniques such as backend response mocking, feature flags, error monitoring, performance optimization, code style standards, regression tests, and tracing, you can create more efficient and reliable software. Let's continue to improve our software and stay in touch! :)


The lead image for this article was generated by HackerNoon's AI Image Generator via the prompt "speed".