A React application usually needs some data from the server to be stored locally for immediate use, mostly to be shown on the page. If the application works with a complex relational database, this can become a real problem. Here I will describe the issues with organizing data that we ran into in one of the projects at DashBouquet and how we managed to solve them.
Let’s start with an example. Say we have an application with multiple pages, each of which needs some data from the server. If there are enough pages, the amount of data becomes too big to fetch all at once when the app loads, so the data is requested every time a page is opened. For every page we probably have a partition in our store to keep page-related data, so we might want to put the fetched data there as well.
Our app will have several pages. The first is a list of teachers at a university, the second a list of students. By clicking on a teacher’s name, the user gets to that teacher’s page with lists of their students and courses. The same goes for a student: clicking on the name opens the student’s page with lists of teachers and courses.
The ER diagram would look like this:
Fig. 1. Entity-Relationship Diagram
And we can decide what data we need on which page:
Fig.2. Division of entities by pages
I will describe the example using the stack we had in the project: Loopback for the back end; React, Redux, and Redux-Saga for the front end; and Axios for interaction with the server.
As we can see, we may need the same data on different pages. For example, we need all the teachers on the Teachers page and some of them on a single student’s page. So where do we store them? Maybe we don’t always need to fetch the teachers on the student’s page? If we decide not to fetch them, where do we take the data from? If we fetch on both pages, where do we put the data to avoid duplication? All this gets quite confusing when data is spread across the whole store, especially if the app has more than four pages and there is a lot of data.
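To make the duplication concrete: with a naive page-by-page layout, the store could end up looking something like this (a hypothetical shape, not taken from the project):

```js
state: {
  teachersPage: {
    teachers: [...],      // all teachers
  },
  studentPage: {
    student: {...},
    teachers: [...],      // some of the same teachers again
  },
}
```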
It may be better to keep all the data in one place. There is also the question of how to fetch it. We definitely don’t want to fetch the entities separately, because that way we would easily reach 30–50 requests per page, which would make page loading far too slow. Instead we want to get as much data as possible in one request (e.g. using an include filter in the case of Loopback). In the response for teachers we get something like this (the shape of a server response):
```js
[
  {
    students: [
      {
        id,
        name,
        studentCourses: [
          {
            studentId,
            courseId,
            grade,
          },
          ...
        ],
      },
      ...
    ],
    courses: [
      {
        id,
        name,
      },
      ...
    ],
  },
  ...
]
```
If you also use Loopback, its documentation describes how to combine related data in queries.
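For instance, the single request behind the response above could carry an include filter along these lines (a sketch reusing the relations from this example):

```js
// One request for teachers together with their related entities.
const filter = {
  include: [
    {relation: 'students', scope: {include: ['studentCourses']}},
    {relation: 'courses'},
  ],
};
// Loopback accepts it JSON-encoded in the query string:
// GET /teachers?filter={"include":[...]}
```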
In this particular case that would spare us two requests, but it is not very helpful when it comes to storing the data on the front end. Firstly, what if something changes in the database, e.g. new courses are added for a teacher? Instead of refetching just the courses, we would have to refetch both courses and teachers. Secondly, we might need courses inside a student entity instance, but when we query the server we don’t include these courses, to avoid duplication.
To cope with the described issues we can start with Normalizr, a utility that normalizes data with nested objects, just like in our case. I will not say much about it: you can find all the details in its documentation. The point is that after a few simple manipulations of Normalizr’s output we get data that we can keep in the store.
We need to define a couple of sagas. If you haven’t used redux-saga in your projects yet, I think this should convince you to do so.
The first saga will fetch data:
```js
// Endpoints for every model we can fetch.
const urls = {
  teachers: '/teachers',
  students: '/students',
  courses: '/courses',
};

// Fetches instances of `model` with the given ids, together with the
// related entities listed in `include` (a Loopback include filter).
export function* fetchModel(model, ids, include) {
  const url = urls[model];
  const where = {id: {inq: ids}};
  const filter = {where, include};
  const params = {filter};
  return yield get({url, params});
}
```
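The get helper used in fetchModel is not shown in the article; here is a minimal sketch, assuming it wraps Axios (which we used for server interaction) with redux-saga's call effect:

```js
import axios from 'axios';
import {call} from 'redux-saga/effects';

// GETs `url` with `params` serialized into the query string and
// returns the parsed response body.
export function* get({url, params}) {
  const response = yield call(axios.get, url, {params});
  return response.data;
}
```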
The second will store the data:
```js
// Fetches the requested models, normalizes the response and puts
// the resulting entities into the store.
export function* queryModels(modelName, ids, include) {
  const singleModelSchema = schema[modelName];
  const denormalizedData = yield fetchModel(modelName, ids, include);
  const normalizedData = normalize(denormalizedData, [singleModelSchema]);
  const {entities} = normalizedData;
  yield put(addEntities(entities));
}
```
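The addEntities action creator and its type constant are also left out of the article; they could be as simple as this (the names follow the saga and the reducer below):

```js
export const ADD_ENTITIES = 'ADD_ENTITIES';

// Carries a map of normalized entities to the reducer.
export const addEntities = (entities) => ({
  type: ADD_ENTITIES,
  entities,
});
```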
And a reducer will add new pieces of data to the already fetched ones:
```js
case ADD_ENTITIES: {
  const models = action.entities;
  const newState = cloneDeep(state);
  return mergeWith(newState, models);
}
```
The mergeWith and cloneDeep methods here are from lodash.
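For completeness, a sketch of the reducer around that case; the initial shape is an assumption based on the state pictured below:

```js
import {cloneDeep, mergeWith} from 'lodash';

const initialState = {teachers: {}, students: {}, courses: {}, studentCourses: {}};

export function modelsReducer(state = initialState, action) {
  switch (action.type) {
    case ADD_ENTITIES: {
      const models = action.entities;
      // cloneDeep + mergeWith produce a new state object with the
      // fresh entities merged over the existing ones.
      const newState = cloneDeep(state);
      return mergeWith(newState, models);
    }
    default:
      return state;
  }
}
```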
Having done all that, we can query data from the server in this manner:
```js
// ids of the teachers to fetch
export function* fetchTeachers(ids) {
  yield queryModels('teachers', ids, [
    {
      relation: 'students',
      scope: {
        include: ['studentCourses'],
      },
    },
    {relation: 'courses'},
  ]);
}
```
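The saga has to be triggered somehow; the wiring is not shown in the article, so this watcher and the FETCH_TEACHERS action shape are assumptions:

```js
import {call, takeEvery} from 'redux-saga/effects';

// Runs fetchTeachers for every FETCH_TEACHERS action, taking the ids
// from the action payload (a hypothetical shape).
export function* watchFetchTeachers() {
  yield takeEvery('FETCH_TEACHERS', function* (action) {
    yield call(fetchTeachers, action.ids);
  });
}
```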
Normalizr uses a normalization schema, as described in its documentation.
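The schema definitions themselves are not shown in the article; one possible set for our entities, matching the schema[modelName] lookup in queryModels, might look like this:

```js
import {schema as normalizrSchema} from 'normalizr';

// Assumes studentCourses rows have their own id; otherwise pass an
// idAttribute option to the Entity constructor.
const studentCourse = new normalizrSchema.Entity('studentCourses');
const course = new normalizrSchema.Entity('courses');
const student = new normalizrSchema.Entity('students', {
  studentCourses: [studentCourse],
});
const teacher = new normalizrSchema.Entity('teachers', {
  students: [student],
  courses: [course],
});

// The lookup map used as schema[modelName] in queryModels.
export const schema = {
  teachers: teacher,
  students: student,
  courses: course,
  studentCourses: studentCourse,
};
```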
Eventually we end up with the state that looks like this:
```js
state: {
  ...
  models: {
    teachers: {...},
    students: {...},
    courses: {...},
    studentCourses: {...},
  },
  ...
}
```
This is basically a nice little copy of a part of our database in the store. There is no need to dispatch plenty of actions to put fetched data into different sections of the store, or to remember where every piece should be stored: it is all done in the queryModels saga, and we always know where the fetched data is going to be put.
After that we can use it on any page of the app, combining it in selectors as required.
In our case, if needed, we can get an object for a teacher as complicated as this (denormalized data):

```js
{
  students: [
    {
      id,
      name,
      studentCourses: [...],
      courses: [...],
    },
    ...
  ],
  courses: [
    {
      id,
      name,
      students: [...],
      teachers: [...],
    },
    ...
  ],
}
```
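Such an object can be assembled in a selector from the normalized entities. A minimal sketch for the students branch (the selector name is mine):

```js
// Builds a teacher's students, each with its studentCourses, from the
// normalized store. Assumes Normalizr left arrays of ids on the entities.
export const selectTeacherStudents = (state, teacherId) => {
  const {teachers, students, studentCourses} = state.models;
  const teacher = teachers[teacherId];
  if (!teacher) return [];
  return (teacher.students || []).map((studentId) => {
    const student = students[studentId];
    return {
      ...student,
      studentCourses: (student.studentCourses || []).map(
        (id) => studentCourses[id],
      ),
    };
  });
};
```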
There is also another way: we can describe an all-purpose API to denormalize the data before using it. The catch is that we need our own denormalization API, because the denormalize function that comes with the Normalizr package only restores data to the shape it had when it came from the server, which is not exactly what we want. As described above, we got courses only within teacher entities, though we might need them elsewhere. For larger projects, I think, it is worth spending some time on a custom denormalization function. However, that is a topic for another article.
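For reference, the built-in call looks like this (assuming the schema map sketched above); it restores exactly the server shape and nothing more:

```js
import {denormalize} from 'normalizr';

// Restores one teacher to the server-response shape: students with
// their studentCourses, and courses, but no courses inside students.
const teacher = denormalize(teacherId, schema.teachers, state.models);
```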
For me it was quite a relief when we started using this approach in our project. The main advantage is that you always know where to find what you need. And if you later decide that it would be better to aggregate the data in another way, you don’t need to mess with the queries again; you just change a selector a little. In general, managing data in the store requires much less effort with this approach.
Written by Ilya Bohaslauchyk