In my quest for a swift, intuitive, and streamlined authentication solution for my Node.js applications, I encountered scenarios demanding rapid implementation without compromising functionality.
From user signup and login to managing forgotten passwords, updating user data, and even account deletion, I sought a comprehensive solution that seamlessly navigates through these essential user interactions.
Thus, my article aims to present precisely that — a cohesive approach integrating clear methodologies to implement authentication and caching, ensuring a robust and efficient user flow.
Here, we’ll bypass the fundamental installation procedures and model creation, honing in directly on the intricacies of authentication and the user flow. We’ll include all necessary links to obtain configuration files throughout the article, ensuring seamless access to the resources needed for setup.
For this implementation, we’ll leverage Node.js version 20.11.1 alongside Knex, Express, and Redis. Additionally, we’ll utilize PostgreSQL as our database, which will be containerized and orchestrated using Docker for seamless management.
The name of our application will be user-flow-boilerplate. Let's create that folder and run npm init -y inside it to generate a basic package.json:
{
  "name": "user-flow-boilerplate",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
Initial package.json
The next step is to add the necessary dependencies:
dependencies: npm i -S bcrypt body-parser cors dotenv express jsonwebtoken knex pg redis validator
devDependencies:
npm i -D @babel/core @babel/eslint-parser @babel/plugin-transform-class-properties @babel/plugin-transform-runtime @babel/preset-env @babel/preset-typescript @faker-js/faker @types/bcrypt @types/body-parser @types/cors @types/express @types/jest @types/jsonwebtoken @types/node @types/node-cron @types/validator @typescript-eslint/eslint-plugin @typescript-eslint/parser babel-jest cross-env eslint eslint-config-prettier eslint-plugin-prettier jest nodemon npm-run-all prettier ts-jest ts-loader ts-node tsconfig-paths tslint typescript webpack webpack-cli webpack-node-externals
and add scripts that will build and run our application:
"scripts": {
  "start": "NODE_ENV=production node dist/bundle.js",
  "build": "NODE_ENV=production webpack --config webpack.config.js",
  "dev": "cross-env NODE_ENV=development && npm-run-all -p dev:*",
  "dev:build": "webpack --config webpack.config.js --watch",
  "dev:start": "nodemon --watch dist --exec node dist/bundle.js",
  "test": "NODE_ENV=test jest --config ./jest.config.js",
  "lint": "eslint ./src -c .eslintrc.json"
},
To ensure the smooth launch of our application, create a src folder and place our initial entry point file, index.ts, within it.
require('dotenv').config();

import process from 'process';
import express from 'express';
import bodyParser from 'body-parser';
import cors from 'cors';

const app = express();
const PORT = process.env.PORT || 9999;

app.use(bodyParser.json());
app.use(cors());

app.get('/api/v1/health', (req, res) => res.status(200).json({ message: 'OK' }));

(async () => {
  try {
    app.listen(PORT, async () => {
      console.log(`Server is running on port ${PORT}`);
    });
  } catch (error) {
    console.error('Failed to start server:', error);
    process.exit(1);
  }
})();
Entrypoint file
For development, we need settings for typescript, lint, jest, babel, prettier, and nodemon. I described all of those files in the following article: Creating a Node.js Server With Postgres and Knex on Express.
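The article defers those config files to the linked piece; for orientation, a minimal webpack.config.js consistent with the scripts above might look like this sketch (loader options and the alias mapping are assumptions):

```javascript
// webpack.config.js: minimal sketch for a Node server bundle
const path = require('path');
const nodeExternals = require('webpack-node-externals');

module.exports = {
  target: 'node',
  mode: process.env.NODE_ENV === 'production' ? 'production' : 'development',
  entry: './src/index.ts',
  externals: [nodeExternals()], // keep node_modules out of the server bundle
  module: {
    rules: [{ test: /\.ts$/, use: 'ts-loader', exclude: /node_modules/ }],
  },
  resolve: {
    extensions: ['.ts', '.js'],
    // resolve the src/* and root/* import aliases used throughout the article
    alias: {
      src: path.resolve(__dirname, 'src'),
      root: path.resolve(__dirname),
    },
  },
  output: { filename: 'bundle.js', path: path.resolve(__dirname, 'dist') },
};
```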
After configuring all settings and creating the entry point, executing npm run dev should start the server, and you should see output similar to the following:
./src/index.ts 1.7 KiB [built] [code generated]
external "dotenv" 42 bytes [built] [code generated]
external "process" 42 bytes [built] [code generated]
external "express" 42 bytes [built] [code generated]
external "body-parser" 42 bytes [built] [code generated]
external "cors" 42 bytes [built] [code generated]
webpack 5.90.3 compiled successfully in 751 ms
[nodemon] restarting due to changes...
[nodemon] starting `node dist/bundle.js`
Server is running on port 9999
Next, in your API client, create a GET request, press cmd + E (on Mac, but the keys depend on your OS) to rename it health. Enter the URL: {{BASE_URI}}/health. For BASE_URI, add a new variable which you are going to use across the collection: http://localhost:9999/api/v1
Afterward, simply click the ‘Send’ button, and you should observe the response body:
{
  "message": "OK"
}
Before moving forward, it's crucial to have our database up and running. We'll accomplish this by launching it with docker-compose. To access and manage the database, you can use whichever development platform you prefer.
We need an .env file with the necessary keys, passwords, and test names:
PORT=9999
WEB_HOST="localhost"
# DB
DB_HOST="localhost"
DB_PORT=5432
DB_NAME="user_flow_boilerplate"
DB_USER="username_123"
DB_PASSWORD="SomeParole999"
# User
DEFAULT_PASSWORD="SomeParole999"
JWT_SECRET="6f1d7e9b9ba56476ae2f4bdebf667d88eeee6e6c98c68f392ed39f7cf6e51c5a"
# Test User
TEST_EMAIL="[email protected]"
TEST_USERNAME="test_username"
TEST_PASSWORD="SomeParole999"
# Redis
REDIS_HOST="localhost"
REDIS_PORT=6379
REDIS_DB=0
REDIS_PASSWORD="SomeParole999"
.env for connection to database, Redis, and test values for seeds
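The JWT_SECRET above is a 64-character hex string (256 bits). If you want to generate your own rather than reuse the sample, Node's crypto module can do it in a couple of lines:

```typescript
// one-off snippet to generate a 256-bit secret as 64 hex characters
import { randomBytes } from 'crypto';

const secret = randomBytes(32).toString('hex');
console.log(secret); // paste the result into .env as JWT_SECRET
```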
Fear not: I randomly generated the JWT_SECRET to illustrate it in a more authentic manner. So, let's create a docker-compose.yml file at the root of the project:
version: '3.6'

volumes:
  data:

services:
  database:
    build:
      context: .
      dockerfile: postgres.dockerfile
    image: postgres:latest
    container_name: postgres
    environment:
      TZ: Europe/Madrid
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    networks:
      - default
    volumes:
      - data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    restart: unless-stopped
  redis:
    image: redis:latest
    container_name: redis
    command: redis-server --requirepass ${REDIS_PASSWORD}
    networks:
      - default
    ports:
      - "6379:6379"
    restart: unless-stopped
docker-compose file with services
We're going to spin up two services in Docker for rapid connectivity. I've streamlined this process to facilitate quick access to the database and Redis, allowing us to retrieve data efficiently. So, let's start those services with docker-compose up; afterwards, docker ps should show the following output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e4bef95de1dd postgres:latest "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:5432->5432/tcp postgres
365e3a68351a redis:latest "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:6379->6379/tcp redis
Now, we need to create the src/@types/index.ts file where we store the types for our application:
export enum Role {
  Admin = 'admin',
  Blogger = 'blogger',
}

export type UserSession = {
  id: number;
};

export type DatabaseDate = {
  created_at: Date;
  updated_at: Date;
};

export type DefaultUserData = {
  role: Role;
};

export interface User extends DatabaseDate {
  id: number;
  email: string;
  username: string;
  password: string;
  role: Role;
}
Types for service
At this moment, you need to have knexfile.ts in the root of the project and a database folder for the connection, migrations, and seeds.
I left a pretty detailed explanation in the Creating a Node.js Server With Postgres and Knex on Express article about how to migrate and seed users to the database using those env variables.
I'd like to specifically check the migrations to ensure that we're on the same page. We already launched our services, so we should be able to check the connection to the database:
docker exec -it postgres psql -U username_123 user_flow_boilerplate
If the connection is good, you will land in the psql console. If the connection has no problems, we should be able to migrate our tables. Run knex migrate:latest, and you should then observe the newly added columns in your users table within the database.
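The migration file itself lives in the database folder described in the linked article. Based on the User type defined earlier, it would look roughly like this sketch (the column details are assumptions, not the author's exact migration):

```typescript
// created via: knex migrate:make create_users
import type { Knex } from 'knex';

export async function up(knex: Knex): Promise<void> {
  await knex.schema.createTable('users', table => {
    table.increments('id').primary();
    table.string('email').notNullable().unique();
    table.string('username').notNullable().unique();
    table.string('password').notNullable();
    table.string('role').notNullable().defaultTo('blogger');
    table.timestamps(true, true); // created_at / updated_at with defaults
  });
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.dropTableIfExists('users');
}
```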
Let's seed it with fake data via knex seed:run, and check the table again.
So, we are now equipped to manipulate the database, allowing us to add, delete, or update users as needed.
Finally, we can forget about settings and preparation and focus on the user flow specifically. For that, we need to create a router that handles the following operations: login, logout, signup, delete_user, and update_user.
For that, in src/routes/index.ts, add the following code:
import { Router } from 'express';
import { authRouter } from 'src/routes/authRouter';
import { healthController } from 'src/controllers/healthController';
import { sessionController } from 'src/controllers/sessionController';
import { authMiddleware } from 'src/middlewares/authMiddleware';
import { userRouter } from 'src/routes/userRouter';
export const router = Router({ mergeParams: true });
router.get('/health', healthController);
router.use('/auth', authRouter);
router.get('/session', authMiddleware, sessionController);
router.use('/user', authMiddleware, userRouter);
router.use((_, res) => {
  return res.status(404).json({ message: 'Not Found' });
});
Routes file
As you can see, at the beginning we added the /health route, which we already checked. So, let's update the entry point to apply those routes there. First, remove the previous get:
-> REMOVE -> app.get('/api/v1/health', (req, res) => res.status(200).json({ message: 'OK' }));
and add to the top of the file:
import { router } from 'src/routes';
// ...
app.use(cors());
app.use('/api/v1', router);
and create the first controller for the health check, src/controllers/healthController.ts, with this code:
import { Request, Response } from 'express';
export const healthController = (_: Request, res: Response) => res.status(200).send('ok');
Health Controller
Now, let's get back to the router and check what more we have to add to the routes. We need two more files: authRouter.ts and userRouter.ts.
import { Router } from 'express';
import { signUpController } from 'src/controllers/auth/signUpController';
import { loginController } from 'src/controllers/auth/loginController';
export const authRouter = Router();
authRouter.post('/signup', signUpController);
authRouter.post('/login', loginController);
Auth Router
import { Router } from 'express';
import { updateUserController } from 'src/controllers/user/updateUserController';
import { deleteUserController } from 'src/controllers/user/deleteUserController';
import { logoutController } from 'src/controllers/user/logoutController';
import { updatePasswordController } from 'src/controllers/user/updatePasswordController';
export const userRouter = Router();
userRouter.patch('/', updateUserController);
userRouter.delete('/', deleteUserController);
userRouter.post('/logout', logoutController);
userRouter.post('/update-password', updatePasswordController);
User Router
I've divided this logic for the sake of readability and responsibility to maintain isolated functionality. All of those routes need controllers where we're going to handle the logic.
Auth and health routes don’t need authentication middleware, so those routes are not protected, but if there is no match, we’re going to get a status 404.
router.get('/health', healthController);
router.use('/auth', authRouter);
Now that we have set up all the routes, we have to set up the user model.
I'll be utilizing a base model for the user model, from which I'll reuse CRUD methods. Create src/models/Model.ts:
import { database } from 'root/database';

export abstract class Model {
  protected static tableName?: string;

  protected static get table() {
    if (!this.tableName) {
      throw new Error('The table name must be defined for the model.');
    }
    return database(this.tableName);
  }

  public static async insert<Payload>(data: Payload): Promise<{ id: number }> {
    const [result] = await this.table.insert(data).returning('id');
    return result;
  }

  public static async updateOneById<Payload>(
    id: number,
    data: Payload
  ): Promise<{ id: number }> {
    const [result] = await this.table.where({ id }).update(data).returning('id');
    return result;
  }

  public static async delete(id: number): Promise<number> {
    return this.table.where({ id }).del();
  }

  public static async findOneById<Result>(id: number): Promise<Result> {
    return this.table.where('id', id).first();
  }

  public static async findOneBy<Payload, Result>(data: Payload): Promise<Result> {
    return this.table.where(data as string).first();
  }
}
Base model
With the base model in place, we can create UserModel.ts in the same folder:
import { Model } from 'src/models/Model';
import { Role, User, DefaultUserData } from 'src/@types';

export class UserModel extends Model {
  static tableName = 'users';

  public static async create<Payload extends { role?: Role }>(data: Payload) {
    return super.insert<Payload & DefaultUserData>({
      ...data,
      role: data.role || Role.Blogger,
    });
  }

  public static findByEmail(email: string): Promise<User | null> {
    return this.findOneBy<{ email: string }, User>({ email });
  }

  public static findByUsername(username: string): Promise<User | null> {
    return this.findOneBy<{ username: string }, User>({ username });
  }
}
User Model
In the user model, I set the role by default if it is not provided in the payload. Now that we have our models ready, we can proceed to utilize them within our controllers and middlewares.
The auth middleware in a Node.js application is responsible for authenticating incoming requests, ensuring that they are coming from valid and authorized users.
It typically intercepts incoming requests, extracts authentication tokens or credentials, and verifies their validity against a predefined authentication mechanism, such as JWT (JSON Web Tokens) in this case.
If the authentication process succeeds, the middleware allows the request to proceed to the next handler in the request-response cycle. However, if authentication fails, it responds with an appropriate HTTP status code (e.g., 401 Unauthorized) and optionally provides an error message.
Create the folder src/middlewares, and add a file authMiddleware.ts there with the following code:
import { jwt } from 'src/utils/jwt';
import { Redis } from 'src/redis';
import type { Request, Response, NextFunction } from 'express';
import type { UserSession } from 'src/@types';

export async function authMiddleware(req: Request, res: Response, next: NextFunction) {
  const authHeader = req.headers['authorization'];
  const token = authHeader && authHeader.split(' ')[1];
  const JWT_SECRET = process.env.JWT_SECRET;

  if (!token) return res.status(401).json({ error: 'Token not provided' });

  if (!JWT_SECRET) {
    console.error('JWT_SECRET Not Found');
    return res.sendStatus(500);
  }

  try {
    const userSession = await jwt.verify<UserSession>(token);

    if (!userSession) {
      return res.sendStatus(401);
    }

    const storedToken = await Redis.getSession(userSession.id);

    if (!storedToken || storedToken !== token) {
      return res.sendStatus(401);
    }

    req.user = userSession;
    next();
  } catch (error) {
    console.error('JWT_ERROR', error);
    return res.sendStatus(401);
  }
}
Auth middleware file
The auth middleware extracts the JWT token from the request header, verifies its validity using the JWT library, and checks if the token matches the one stored in Redis.
If the token is valid and matches the stored token, the middleware sets the authenticated user session on the request object (req.user) and calls the next() function to pass control to the next middleware or route handler. Otherwise, it responds with a 401 status code indicating authentication failure.
Let's review the util for JWT. Create the src/utils/jwt.ts file with the following code:
require('dotenv').config();

import jsonwebtoken from 'jsonwebtoken';

const JWT_SECRET = process.env.JWT_SECRET as string;

export const jwt = {
  verify: <Result>(token: string): Promise<Result> => {
    if (!JWT_SECRET) {
      throw new Error('JWT_SECRET not found in environment variables!');
    }
    return new Promise((resolve, reject) => {
      jsonwebtoken.verify(token, JWT_SECRET, (error, decoded) => {
        if (error) {
          reject(error);
        } else {
          resolve(decoded as Result);
        }
      });
    });
  },
  sign: (payload: string | object | Buffer): Promise<string> => {
    if (!JWT_SECRET) {
      throw new Error('JWT_SECRET not found in environment variables!');
    }
    return new Promise((resolve, reject) => {
      try {
        resolve(jsonwebtoken.sign(payload, JWT_SECRET));
      } catch (error) {
        reject(error);
      }
    });
  },
};
JWT utility file
This utility serves a critical role in handling JSON Web Tokens within the Node.js application. The jwt object exports functions for both signing and verifying JWTs, leveraging the jsonwebtoken library. These functions facilitate the creation and validation of JWTs, essential for implementing authentication mechanisms in the application.
The utility encapsulates the functionality for handling JWTs, ensuring secure authentication mechanisms within the Node.js application while adhering to best practices for environment variable management.
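For intuition, an HS256 token like the ones jsonwebtoken produces is just two base64url-encoded JSON segments plus an HMAC-SHA256 signature over them. This standalone sketch (educational only, not the jsonwebtoken API) shows the anatomy:

```typescript
// jwt-anatomy.ts: what an HS256 JWT contains, using only Node's crypto
import { createHmac } from 'crypto';

const base64url = (s: string): string => Buffer.from(s).toString('base64url');

export function signHS256(payload: object, secret: string): string {
  const header = base64url(JSON.stringify({ alg: 'HS256', typ: 'JWT' }));
  const body = base64url(JSON.stringify(payload));
  // the signature binds header and payload to the secret
  const signature = createHmac('sha256', secret)
    .update(`${header}.${body}`)
    .digest('base64url');
  return `${header}.${body}.${signature}`;
}

const token = signHS256({ id: 42 }, 'demo-secret');
console.log(token.split('.').length); // three dot-separated segments
```

Anyone can decode the payload (it is only base64url, not encrypted); the secret is needed solely to produce or verify the signature, which is why JWT_SECRET must stay private.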
Redis is used as a database, cache, and message broker. It is commonly used in a variety of use cases, including caching, session management, real-time analytics, messaging queues, leaderboards, and more.
Checking the token in Redis serves as an additional layer of security and validation for the JWT token. Let's dive into the settings. For that, create the file src/redis/index.ts with the following code:
require('dotenv').config({
  path: '../../.env',
});

import process from 'process';
import * as redis from 'redis';

const client = redis.createClient({
  url: `redis://:${process.env.REDIS_PASSWORD}@${process.env.REDIS_HOST}:${process.env.REDIS_PORT}`,
});

client.on('error', error => console.error('Redis Client Error', error));

const connect = async () => {
  try {
    await client.connect();
    console.log('Connected to Redis');
  } catch (err) {
    console.error(`Could not connect to Redis: ${err}`);
    process.exit(1);
  }
};

class Redis {
  public static setSession(userId: number, token: string) {
    if (!userId) throw new Error('userId is required');
    if (!token) throw new Error('token is required');

    try {
      return client.set(`session:${userId}`, token);
    } catch (error) {
      console.error(error);
    }
  }

  public static getSession(userId: number) {
    if (!userId) throw new Error('userId is required');
    return client.get(`session:${userId}`);
  }

  public static deleteSession(userId: number) {
    if (!userId) throw new Error('userId is required');

    try {
      return client.del(`session:${userId}`);
    } catch (error) {
      console.error(error);
    }
  }
}

export { client, connect, Redis };
Redis Session Store
With Redis, we are going to store and manage user session tokens. In the auth middleware, after verifying the JWT token's authenticity, the middleware checks whether the token exists and matches the one stored in Redis for the corresponding user session. This helps ensure that only valid and authorized users can access protected routes.
Redis is used as a key-value store to maintain user session tokens. When a user logs in or authenticates, their session token is stored in Redis. This allows for efficient and fast retrieval of session tokens during subsequent authentication checks.
Redis is utilized in the auth middleware for efficient session management, while the Redis-related file handles the configuration and connection to the Redis server and provides functions for interacting with Redis in other parts of the application.
This setup ensures secure and reliable authentication mechanisms, with user session tokens stored and managed in Redis.
The last part is we have to connect to Redis in our entry point:
// all imports
import * as Redis from 'src/redis';

const app = express();
const PORT = process.env.PORT || 9999;

// middlewares

(async () => {
  try {
    await Redis.connect();
    app.listen(PORT, async () => {
      console.log(`Server is running on port ${PORT}`);
    });
  } catch (error) {
    console.error('Failed to start server:', error);
    process.exit(1);
  }
})();
Connect to Redis
After completing the authentication preparation, we can now shift our focus to the controllers.
Controllers in routes help to organize the application’s logic by separating concerns and promoting code maintainability. We’ve already created the controller for the health check. Next, we’ll proceed to create controllers for handling operations with the user.
The first controller we'll tackle is sessionController.ts, which has to be in src/controllers with the following code:
import { Request, Response } from 'express';
import { UserModel } from 'src/models/UserModel';
import type { User } from 'src/@types';

export const sessionController = async (req: Request, res: Response) => {
  if (!req.user) return res.sendStatus(401);

  try {
    const user = await UserModel.findOneById<User>(req.user.id);

    if (user) {
      return res.status(200).json(user);
    } else {
      return res.sendStatus(401);
    }
  } catch (error) {
    return res.sendStatus(500);
  }
};
Session Controller
This controller serves the purpose of handling a session-related endpoint, likely responsible for retrieving information about the currently authenticated user. We need this controller for the following reasons:
User Session Information: This controller allows the application to retrieve information about the user’s session, such as their user profile or other relevant data. This information can be useful for customizing the user experience or providing personalized content based on the user’s profile.
Authentication and Authorization: By checking if req.user exists, the controller ensures that only authenticated users can access the endpoint. This helps enforce authentication and authorization rules, ensuring that sensitive user data is only accessible to authorized users.
User Profile Retrieval: The controller queries the database (using the UserModel) to retrieve the user's information based on their session ID. This allows the application to fetch user-specific data dynamically, providing a tailored experience for each user. This part can definitely be improved with a Redis cache:
import { Request, Response } from 'express';
import { UserModel } from 'src/models/UserModel';
import { client } from 'src/redis';
import type { User } from 'src/@types';

// cache profiles under their own key so we don't overwrite the session token
const CACHE_EXPIRATION = 3600; // seconds

export const sessionController = async (req: Request, res: Response) => {
  if (!req.user) return res.sendStatus(401);

  try {
    const cachedProfile = await client.get(`profile:${req.user.id}`);

    if (cachedProfile) {
      return res.status(200).json(JSON.parse(cachedProfile));
    } else {
      const user = await UserModel.findOneById<User>(req.user.id);

      if (user) {
        await client.set(`profile:${req.user.id}`, JSON.stringify(user), {
          EX: CACHE_EXPIRATION,
        });
        return res.status(200).json(user);
      } else {
        return res.sendStatus(401);
      }
    }
  } catch (error) {
    console.error('Error retrieving user profile:', error);
    return res.sendStatus(500);
  }
};
Session Controller file with Redis set session
We define a constant CACHE_EXPIRATION to specify the cache expiration time in seconds. In this example, it's set to 3600 seconds (1 hour). Cached data is periodically refreshed, preventing stale data from being served to users and maintaining data integrity within the cache.
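The cache-aside pattern behind this controller can be illustrated independently of Redis with a tiny in-memory store (a hypothetical helper for illustration only; the app itself relies on Redis):

```typescript
// ttl-cache.ts: in-memory illustration of the cache-aside pattern
type Entry<T> = { value: T; expiresAt: number };

export class TtlCache<T> {
  private store = new Map<string, Entry<T>>();

  constructor(private ttlSeconds: number) {}

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlSeconds * 1000 });
  }

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // stale: evict and report a miss
      return undefined;
    }
    return entry.value;
  }
}

// cache-aside read: return the cached profile or fall back to the "database"
const profiles = new TtlCache<{ id: number; username: string }>(3600);
const loadProfile = (id: number) =>
  profiles.get(`profile:${id}`) ?? { id, username: 'from-db' };
```

Redis gives us the same get/set-with-TTL semantics, but shared across processes and surviving application restarts.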
Before proceeding to create the signUpController, which manages the sign-up process for new users in our application, let's review the schema.
In our case, when someone attempts to sign up with an email that already exists in the database, we prioritize user privacy by not explicitly revealing whether the user exists. Instead, we inform the client with a generic message stating Invalid email or password.
This approach encourages the client to submit valid credentials without disclosing unnecessary information about existing users.
Now let's create src/controllers/auth/signUpController.ts, and add the following code:
import bcrypt from 'bcrypt';
import { jwt } from 'src/utils/jwt';
import { Request, Response } from 'express';
import { validate } from 'src/helpers/validation/validate';
import { userSchema } from 'src/helpers/validation/schemas/userSchema';
import { UserModel } from 'src/models/UserModel';
import { Redis } from 'src/redis';
import type { User } from 'src/@types';
import { getRandomString } from 'src/utils/getRandomString';

type Payload = Omit<User, 'id' | 'created_at' | 'updated_at' | 'role'>;

export async function signUpController(req: Request, res: Response) {
  const { email, password }: Payload = req.body;

  const validation = validate<Payload>(req.body, userSchema);

  if (!validation.isValid) {
    return res.status(400).send(`Invalid ${validation.invalidKey}`);
  }

  try {
    const user = await UserModel.findOneBy({ email });

    if (user) {
      return res.status(400).json({ message: 'Invalid email or password' });
    }

    const hashedPassword = (await bcrypt.hash(password, 10)) as string;
    const username = `${email.split('@')[0]}${getRandomString(5)}`;

    const createdUser = await UserModel.create<Payload>({
      email,
      password: hashedPassword,
      username,
    });

    const token = await jwt.sign({
      id: createdUser.id,
    });

    await Redis.setSession(createdUser.id, token);

    res.status(200).json({
      token,
    });
  } catch (error) {
    return res.sendStatus(500);
  }
}
Sign Up Controller
The controller receives a request containing the user's email and password, typically from a sign-up form. It validates the incoming data against a predefined userSchema to ensure it meets the required format.
If validation passes and no user with that email already exists, the controller proceeds to hash the password using bcrypt.hash, generates a username, and creates the user using UserModel.create.
Finally, it generates a token using jwt, sets the session data in Redis, and sends the token back to the user.
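The getRandomString helper imported by the controller isn't shown in the article; a plausible minimal implementation (an assumption, not the author's code) could be:

```typescript
// src/utils/getRandomString.ts: hypothetical helper used to de-duplicate
// generated usernames with a short random suffix
import { randomBytes } from 'crypto';

export function getRandomString(length: number): string {
  // hex yields two characters per byte; trim to the requested length
  return randomBytes(Math.ceil(length / 2)).toString('hex').slice(0, length);
}

console.log(getRandomString(5).length); // 5
```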
Now, let's focus on the creation of a login controller. Create the file src/controllers/auth/loginController.ts:
require('dotenv').config({
  path: '../../.env',
});

import bcrypt from 'bcrypt';
import { Request, Response } from 'express';
import { jwt } from 'src/utils/jwt';
import { UserModel } from 'src/models/UserModel';
import { Redis } from 'src/redis';

export async function loginController(req: Request, res: Response) {
  const { email, password } = req.body;

  if (!email || !password) {
    return res.status(400).json({ message: 'Invalid email or password' });
  }

  try {
    const user = await UserModel.findByEmail(email);

    if (user) {
      const isValidPassword = await bcrypt.compare(password, user.password);

      if (!isValidPassword) {
        return res.status(400).json({ message: 'Invalid email or password' });
      }

      const token: string = await jwt.sign({
        id: user.id,
      });

      await Redis.setSession(user.id, token);

      res.status(200).json({ token });
    } else {
      return res.status(400).json({ message: 'Invalid email or password' });
    }
  } catch (error) {
    console.error(error);
    return res.sendStatus(500);
  }
}
Login Controller
Essentially, we begin by validating the provided fields and then checking for the existence of a user. If no user is found, we respond with a 400 status code along with the message Invalid email or password, similar to the behavior in the signUpController.
If a user exists, we proceed to compare the provided password with the hashed password stored in the database using bcrypt.compare.
If the passwords do not match, we respond with the familiar message 'Invalid email or password.' Finally, upon successful authentication, we generate a token, set the session in Redis, and send the token back to the client.
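bcrypt takes care of salting and verification internally. Conceptually, salted hashing and comparison work like the following sketch built on Node's built-in scrypt (an illustration of the idea, not a bcrypt substitute):

```typescript
// salted-hash.ts: what bcrypt.hash / bcrypt.compare do conceptually
import { scryptSync, randomBytes, timingSafeEqual } from 'crypto';

export function hashPassword(password: string): string {
  const salt = randomBytes(16).toString('hex'); // unique salt per user
  const hash = scryptSync(password, salt, 64).toString('hex');
  return `${salt}:${hash}`; // store the salt alongside the hash
}

export function comparePassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(':');
  const candidate = scryptSync(password, salt, 64).toString('hex');
  // constant-time comparison avoids leaking information via timing
  return timingSafeEqual(Buffer.from(candidate), Buffer.from(hash));
}

const stored = hashPassword('SomeParole999');
console.log(comparePassword('SomeParole999', stored)); // true
console.log(comparePassword('wrong-password', stored)); // false
```

Because the salt is random, hashing the same password twice yields different stored values, which is why we must re-run the derivation at login instead of comparing hashes directly.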
Let's review our protected controllers, which depend on the presence of a user id obtained from the middleware. We consistently rely on this user id for operations within these controllers. In cases where the request lacks an authorization header, we must respond with a 401 status code.
const authHeader = req.headers['authorization'];
Create the file src/controllers/user/logoutController.ts with the following code:
import type { Request, Response } from 'express';
import { Redis } from 'src/redis';

export async function logoutController(req: Request, res: Response) {
  try {
    await Redis.deleteSession(req.user.id);
    return res.sendStatus(200);
  } catch (error) {
    return res.sendStatus(500);
  }
}
Logout Controller
This logoutController is responsible for logging a user out of the system. Upon receiving a request, it interacts with the Redis client to delete the session associated with the user.id. If the operation is successful, it responds with a 200 status code to indicate a successful logout.
However, if an error occurs during the process, it responds with a 500 status code to signal an internal server error.
Next, let’s address the deletion of user data.
Create src/controllers/user/deleteUserController.ts, and add this code:
import { Request, Response } from 'express';
import { UserModel } from 'src/models/UserModel';
import { Redis } from 'src/redis';

export const deleteUserController = async (req: Request, res: Response) => {
  const user_id = req.user.id;

  try {
    await Redis.deleteSession(user_id);
    await UserModel.delete(user_id);

    return res.sendStatus(200);
  } catch (error) {
    return res.sendStatus(500);
  }
};
Delete user controller
When a request is received, it extracts the user ID from the request object, typically obtained from the authentication middleware.
Subsequently, it proceeds to delete the session associated with this user_id from Redis using the Redis client. Afterward, it invokes the delete method of the UserModel to remove the user's data from the database.
Upon successful deletion of both the session and the user data, it responds with a 200 status code to indicate successful deletion. In the event of an error during the deletion process, it responds with a 500 status code to signify an internal server error.
To update the user data in the system, create src/controllers/user/updateUserController.ts, and add the following code to the file:
import { Request, Response } from 'express';
import { UserModel } from 'src/models/UserModel';
import { filterObject } from 'src/utils/filterObject';

type Payload = {
  first_name?: string;
  last_name?: string;
  username?: string;
};

export const updateUserController = async (req: Request, res: Response) => {
  const { first_name, last_name, username } = req.body;

  const payload: Payload = filterObject({
    first_name,
    last_name,
    username,
  });

  try {
    // only check for collisions when a new username was actually provided
    if (username) {
      const existingUserName = await UserModel.findByUsername(username);

      if (existingUserName) {
        return res.status(400).json({
          error: 'Invalid username',
        });
      }
    }

    const updatedUser = await UserModel.updateOneById<typeof payload>(req.user.id, payload);

    res.status(200).json(updatedUser);
  } catch (error) {
    res.sendStatus(500);
  }
};
Update User Controller
Upon receiving a request, it extracts the fields first_name, last_name, and username from the request body. Next, it filters these fields using the filterObject utility function to ensure that only provided fields are included in the payload.
Subsequently, it checks whether the provided username already exists in the database. If it does, the controller responds with a 400 status code and an error message indicating an invalid username. If the username is unique, the controller proceeds to update the user data in the database using the updateOneById method of the UserModel.
Upon a successful update, it responds with a 200 status code and the updated user data. In case of any errors during the update process, the controller responds with a 500 status code to signify an internal server error.
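The filterObject utility referenced by the controller isn't shown in the article; presumably it strips undefined values so a partial PATCH payload never overwrites existing columns. A minimal sketch (hypothetical implementation):

```typescript
// src/utils/filterObject.ts: hypothetical helper that drops undefined values
export function filterObject<T extends Record<string, unknown>>(obj: T): Partial<T> {
  return Object.fromEntries(
    Object.entries(obj).filter(([, value]) => value !== undefined)
  ) as Partial<T>;
}

console.log(filterObject({ first_name: 'Ada', last_name: undefined, username: 'ada' }));
// { first_name: 'Ada', username: 'ada' }
```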
The last controller will update the password; it's pretty much the same idea as updating the user data, but with hashing of the new password. Create the last controller from our list, src/controllers/user/updatePasswordController.ts, and add the code:
import { Request, Response } from 'express';
import { UserModel } from 'src/models/UserModel';
import bcrypt from 'bcrypt';

export const updatePasswordController = async (req: Request, res: Response) => {
  try {
    const { password } = req.body;

    if (!password) return res.sendStatus(400);

    const hashedPassword = (await bcrypt.hash(password, 10)) as string;
    const user = await UserModel.updateOneById(req.user.id, { password: hashedPassword });

    return res.status(200).json({ id: user.id });
  } catch (error) {
    return res.sendStatus(500);
  }
};
Update Password controller
Upon receiving a request, it extracts the new password from the request body. It then checks if a password is provided in the request body. If not, it responds with a 400 status code, indicating a bad request. Next, it hashes the new password using the bcrypt library with a salt factor of 10.
The hashed password is then stored securely in the database using the updateOneById method of the UserModel, associating it with the user.id. Upon a successful password update, the controller responds with a 200 status code and a JSON object containing the user's ID.
In case of any errors during the password update process, the controller responds with a 500 status code to indicate an internal server error, as in the other controllers.
Make sure to review and set up the validation helper and utilities used by the controllers above.
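For completeness, here is a rough sketch of what the validate helper and userSchema might look like; the real helpers are built on the validator package, and the field rules below are assumptions:

```typescript
// src/helpers/validation: hypothetical sketch of validate and userSchema
type Schema<T> = { [K in keyof T]?: (value: T[K]) => boolean };

export function validate<T extends Record<string, any>>(
  data: T,
  schema: Schema<T>
): { isValid: boolean; invalidKey?: string } {
  for (const key of Object.keys(schema) as (keyof T)[]) {
    const rule = schema[key];
    if (rule && !rule(data[key])) {
      return { isValid: false, invalidKey: String(key) };
    }
  }
  return { isValid: true };
}

// assumed rules: syntactically valid email, password of at least 8 characters
export const userSchema: Schema<{ email: string; password: string }> = {
  email: email => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email ?? ''),
  password: password => typeof password === 'string' && password.length >= 8,
};

console.log(validate({ email: 'test@example.com', password: 'SomeParole999' }, userSchema));
// { isValid: true }
```

Returning the first failing key matches how signUpController builds its `Invalid ${validation.invalidKey}` response.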
Let’s check the signup endpoint:
As evident, we have obtained a token, which will be utilized in the header to retrieve the session.
We sent the authorization token in the header to the server, and in response, the server provided us with the user data retrieved from the database.
Feel free to explore and experiment with security features and Redis caching. With the foundational model in place, you can delve into additional functionalities, such as account recovery for users who forget their passwords. However, this topic will be reserved for a future article.
Managing routing and user authentication flow in a scalable manner can be challenging. While we’ve implemented middleware to safeguard routes, there are additional strategies available to enhance the performance and reliability of the service.
The user experience could be further enhanced with clearer error messages, as error handling remains a significant aspect that deserves more comprehensive coverage. However, we've successfully implemented the primary authentication flow, enabling users to sign up, access their accounts, retrieve session data, update user information, and delete accounts.
I hope you found this journey insightful and gained valuable knowledge into user authentication.