This Slogging thread by Arthur Tkachenko occurred in Slogging's official #programming channel, and has been edited for readability.
Today I want to introduce a simple module for parsing CSV files.
Recently I was exploring my old repository: https://github.com/Food-Static-Data/food-datasets-csv-parser
Inside I have a cool set of small modules that have helped me a lot. Since my code is tightly tied to those packages, I just need to pray for the developers who build them, so I don't have to spend precious time maintaining that code myself.
List of modules that I'm using:
Why did I create this package? It's simple. During our work at Groceristar, we came across a number of databases and datasets related to "food tech". To be able to extract that data and play with it, you need to parse CSV files.
Link to the repository: https://github.com/Food-Static-Data/food-datasets-csv-parser
Link to npm page: https://www.npmjs.com/package/@groceristar/food-dataset-csv-parser
I also post updates about building modules for static data on Indie Hackers. While it hasn't helped much with promotion, founders are pretty positive people and their support really matters. Here is the org I created a few years ago: https://www.indiehackers.com/product/food-static-data
As usual, experienced developers might tell me that I'm stupid and that CSV parsing is a mundane procedure. But I don't care. I realized that we were running similar code across a few separate projects, so I decided to isolate it.
I did it a few times before I finally found a way to make it work the way I like, and you can see how it looks right now.
I can say it's not ideal, but it has been working fine for me. Right now I plan to revamp this package a little bit to make it work with the latest versions of rollupjs and Babel.
The idea is simple: connect a dataset in CSV format, parse it, and export the data as you need it. But when you need to make it work with 10 different datasets, things aren't as easy as they sound in your head.
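To make that flow concrete, here is a minimal sketch of the "connect, parse, export" idea. This is not the package's actual implementation; the papaparse dependency, function name, and file path below are just assumptions for illustration.

```javascript
// Minimal sketch of the "connect -> parse -> export" flow described above.
// NOTE: illustrative only; the real package may use a different CSV parser.
const fs = require('fs');
const Papa = require('papaparse'); // assumed CSV parsing dependency

function parseCsvDataset(filePath) {
  const text = fs.readFileSync(filePath, 'utf8');
  // header: true turns each row into an object keyed by the column names
  const { data, errors } = Papa.parse(text, { header: true, skipEmptyLines: true });
  if (errors.length) {
    console.warn(`${errors.length} rows could not be parsed cleanly`);
  }
  return data; // export plain row objects so other modules stay data-agnostic
}

// Hypothetical usage:
// const ingredients = parseCsvDataset('./datasets/ingredients.csv');
module.exports = { parseCsvDataset };
```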
CSVs are not only related to food tech datasets, but for me it was important to be able to use different datasets and to play with them easily. It makes the other modules we are building data-agnostic and more independent of any particular database, framework, or logic. Basically, around this idea we created and optimized something like 13 repositories. Recently I created a separate organization that is focused on those repositories only.
Link: https://github.com/Food-Static-Data
Later I plan to remove some of those repositories once they can be replaced by other, more popular and stable tools. The current module can be useful for parsing other datasets too, but decoupling it from the food tech topic isn't my goal at this point.
I was also able to include and implement cool and important packages like husky and coveralls. I can't say I get the most out of them, but they helped me jump into the "open source ocean", part of the GitHub rabbit hole I've been exploring for so many years.
And it's good not just to type another line of code, but also to be able to see that your codebase is solid and nothing is breaking behind your back.
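For reference, this is roughly how husky and coveralls are usually wired into an npm package. It's a generic example, not a copy of this package's package.json; the husky v4-style "hooks" block, the jest test runner, and the script names are assumptions.

```json
{
  "scripts": {
    "test": "jest --coverage",
    "coveralls": "cat ./coverage/lcov.info | coveralls"
  },
  "husky": {
    "hooks": {
      "pre-commit": "npm test"
    }
  }
}
```

With a setup like this, every commit runs the test suite locally, and the coverage report can be pushed to Coveralls from CI.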
CodeClimate (https://codeclimate.com/) helped me explore and take another look at how I develop things.
Yeah, CodeClimate shows that I have code duplicates and ~50 hours of tech debt. Looks like a lot, right? But this is a small, independent package.
Imagine how much tech debt your huge monolith project has. Years of wasted developer time, probably 🙂
At some point I'll remove the duplicates, and that will reduce the number of hours on this page.
Plus, your product owner or CTO is usually busy and can't review the code or keep track of what's happening inside it.
CodeClimate can do some of that for you; just check its settings. Plus, they support the open-source movement, so if your code is open and hosted on GitHub, you can use it for free.
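As an example of the kind of settings I mean, a repository can carry a .codeclimate.yml file like the sketch below. The thresholds and excluded folders here are made up for illustration and are not this package's real config.

```yaml
# .codeclimate.yml — illustrative sketch, not the package's actual config
version: "2"
checks:
  similar-code:        # the duplicate-code check mentioned above
    config:
      threshold: 50
  method-lines:
    config:
      threshold: 30
exclude_patterns:
  - "dist/"
  - "test/"
```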
The stretch goals are simple.
We even put together a great README file with an explanation of how to run this package.