A journey from the origins of computing and data analytics to what we now call the "Modern Data Stack". What comes next?

The Origins of Computing and Data Analytics

The origins of computing and data analytics began in the mid-1950s and started taking shape with the introduction of SQL in 1970:

- 1954: Georgetown-IBM experiment - machine translation of Russian to English, an early milestone in Natural Language Processing (NLP)
- 1960: Punch cards
- 1970: Structured Query Language (SQL)
- 1970s: Interactive Financial Planning Systems - a language created to "allow executives to build models without intermediaries"
- 1972: C; LUNAR - one of the earliest applications of modern computing, a natural-language information retrieval system that helped geologists access, compare, and evaluate chemical-analysis data on moon rock and soil composition
- 1975: Express - the first Online Analytical Processing (OLAP) system, intended to analyse business data from different points of view
- 1979: VisiCalc - the first spreadsheet computer program
- 1980s: Group Decision Support Systems - a "Computerized Collaborative Work System"

The "Modern Data Stack"

The "Modern Data Stack" is a set of technologies and tools used to collect, store, process, analyze, and visualize data in a well-integrated cloud-based platform. Although QlikView predates the cloud, it is the earliest example of what most would recognize as an analytics dashboard, the pattern later popularized by platforms like Tableau and PowerBI:

- 1994: QlikView - "dashboard-driven analytics"
- 2003: Tableau
- 2009: Wolfram Alpha - "computational search engine"
- 2015: PowerBI
- 2017: ThoughtSpot - "search-driven analytics"

Paper, Query Languages, Spreadsheets, Dashboards, Search - What Next?

Some of the most innovative analytics applications, at least in terms of user experience, convert human language into some computational output. A tale as old as time: LUNAR was first developed in the 1970s to help geologists access, compare, and evaluate chemical-analysis data using natural language.
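That LUNAR lineage survives in today's text-to-SQL systems: a natural-language question is translated into a SQL query and executed against a database. A minimal, self-contained sketch of the idea, assuming a toy moon-rock table and a rule-based stand-in where a real system would call a language model (the table name, columns, and `text_to_sql` function are all invented for illustration):

```python
import sqlite3

# Toy corpus: a miniature version of the kind of table LUNAR queried.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (id INTEGER, mineral TEXT, pct REAL)")
conn.executemany(
    "INSERT INTO samples VALUES (?, ?, ?)",
    [(1, "olivine", 12.5), (2, "ilmenite", 9.1), (3, "olivine", 15.0)],
)

def text_to_sql(question: str) -> str:
    """Rule-based stand-in for an LLM: handles one question shape only.

    Real systems (WikiSQL-era seq2seq models, modern LLMs) generalise
    across schemas and phrasings; this toy parses "average <mineral>
    content" and nothing else.
    """
    mineral = question.lower().split("average ")[1].split(" content")[0]
    return f"SELECT AVG(pct) FROM samples WHERE mineral = '{mineral}'"

sql = text_to_sql("What is the average olivine content?")
result = conn.execute(sql).fetchone()[0]
print(result)  # 13.75
```

The hard part, of course, is the translation step: replacing the one-pattern parser above with a model that handles arbitrary questions over arbitrary schemas is exactly what the benchmarks below measure.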
- Text-to-SQL: Salesforce's WikiSQL introduced the first extensive compendium of data built for the text-to-SQL use case, but it contained only simple SQL queries. The Yale Spider dataset introduced a benchmark for more complex queries, and most recently, BIRD introduced real-world "dirty" queries and efficiency scores to create a proper benchmark for text-to-SQL applications.
- Text-to-Computational-Language: Wolfram Alpha, ThoughtSpot
- Text-to-Code: ChatGPT Advanced Data Analysis

Is "Conversation-Driven Data Analytics" a Natural Evolution?

- The UX of modern analytics interfaces like search and chat is evolving, becoming more intuitive, enabled by NLP and LLMs
- Analytics interfaces have their origins in enabling decision-makers, but decision-makers are still largely reliant on data analysts
- Many decision-maker queries are ad-hoc, best suited to "throwaway analytics"
- Insight generation is a creative process where many insights are gained in conversations about data, possibly with peers
- The data analytics workflow is disjointed, from the imagination of an analysis to the presentation of results

Acknowledgments

Dates for the section "The Origins of Computing and Data Analytics" thanks to https://web.paristech.com/hs-fs/file-2487731396.pdf and http://dssresources.com/history/dsshistoryv28.html.