
Google’s New AI Model, NotebookLM, will Rewrite the Academic Playbook Forever

by The Anti-Economist, October 6th, 2023

Too Long; Didn't Read

NotebookLM is basically an AI model that lets you input sources and then produces content based on those sources. In the world of academia, this can mean essays, study guides, essay outlines, video scripts, summaries, etc. This will permanently change the potential of technology to optimize information retention and processing.


Google’s new AI project, NotebookLM, moves away from general-purpose AI models and toward specialized models that let you interact with specific sources and perform tasks grounded in that source content.


Google’s recent interest in AI

Since November 2022, the rapid proliferation of generative AI technology, and its inflated commercial potential, has left many tech giants eager to get in on the ground floor of this emerging technology, hoping to capitalize on its future potential.


Google has recently begun rolling out a new player: a generative AI model called NotebookLM. Publicly available only through a waitlist, the AI is being tested with a select group of people, either specifically invited or chosen from NotebookLM’s waitlist.


Google, being a leading tech giant, has the resources and expertise to propel its AI model beyond what we previously thought possible.


While there isn’t much information available about the new tool, for those of us enthralled by the development of AI, the tectonic plates of the tech industry have already begun to shift. Sifting through the information available on NotebookLM, this article covers all angles of the product’s development, accessibility and pricing, and features.


Unveiling NotebookLM

In late 2022, Google’s $300 million investment in generative AI developer Anthropic marked the beginning of a partnership: Google shares its cloud computing service, Google Cloud, while Anthropic shares its AI knowledge and expertise, an exchange that presumably fed into the development of NotebookLM.


NotebookLM is unlike anything we have seen on the market yet; it’s a specialized language model that interprets sources on the fly, analyzing and answering questions about the data using only the information you supply.


Everything we have seen so far, such as OpenAI’s ChatGPT and Google’s Bard, has been a general language model that draws on a vast amount of knowledge to answer general questions, but NotebookLM offers users a far more time-efficient way to learn about a specific topic from multiple credible sources.


NotebookLM is intended to take a handful of sources on similar topics, quickly analyze them, and provide a summary and overview of key points, while also letting users question specific aspects of the data just as they would with any other AI.


Still in its development phase, the AI can currently only work with five sources under 10,000 words; however, we can expect the final product to accept more types of sources containing larger sets of data.
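
To make the idea concrete, here is a minimal Python sketch of source-grounded prompting, the general technique NotebookLM appears to build on: the model is instructed to answer only from the sources you hand it. The `call_llm` placeholder and the exact limits are assumptions for illustration, not Google’s implementation.

```python
# A minimal sketch of source-grounded prompting: restrict the model to a small
# set of user-supplied sources. The limit below echoes the reported preview cap
# (five sources, ~10,000 words); `call_llm` is a hypothetical placeholder, not
# a real NotebookLM or Google API.

from typing import List

MAX_SOURCES = 5  # reported preview limit on the number of sources

def build_grounded_prompt(sources: List[str], question: str) -> str:
    """Assemble a prompt that tells the model to answer only from the sources."""
    if len(sources) > MAX_SOURCES:
        raise ValueError(f"Preview reportedly accepts at most {MAX_SOURCES} sources")
    numbered = "\n\n".join(
        f"[Source {i + 1}]\n{text}" for i, text in enumerate(sources)
    )
    return (
        "Answer the question using ONLY the numbered sources below. "
        "Cite the source number for every claim, and reply 'not in the sources' "
        "if the answer cannot be found there.\n\n"
        f"{numbered}\n\nQuestion: {question}"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whichever chat/completion API you actually use."""
    raise NotImplementedError("Wire this up to the model of your choice")

if __name__ == "__main__":
    sources = ["Lecture notes on supply and demand...", "Chapter 3 of the textbook..."]
    print(build_grounded_prompt(sources, "How does a price ceiling create shortages?"))
```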


The Technological Leap

For those who need to familiarize themselves with large amounts of specific, credible data, generalized AI models like Claude, ChatGPT, and Bard can only give general overviews of broad topics based on, essentially, the whole internet’s worth of information.


For students, researchers, and professors, an AI that lets you input selected sources on a very specific topic, processes the data, and answers specific questions while pulling only from those sources is an absolute game changer. That is exactly what NotebookLM promises.


Say you’re a student who needs to study for an important test, complete an assignment, or even do the reading and preparation for a class. You can open NotebookLM, input sources relating to a specific topic, whether entire textbooks, research articles, or books, get a summary of the topic the sources discuss, ask questions about the sources’ different perspectives and nuances, and come up with potential questions to ask your professor.


Based on the input sources, you could even ask the AI to help you outline an essay around a specific question, or to come up with the 20 most likely essay questions for an exam on that material, to help you prepare for a test.


It could also create video scripts, pitch decks, analyses, and who knows what else.


This is an absolute game changer when it comes to academia, and many students will find a tool like this essential to functioning when their workload is high.


Content Regulation

If we consider Anthropic and Google’s partnership and sharing of expertise in the development of this AI, it’s safe to assume that the principles of Anthropic’s regulation would apply similarly to NotebookLM.


Anthropic was founded by several former OpenAI employees who fundamentally disagreed with the ethics of OpenAI’s billion-dollar partnership with Microsoft, raising flags about whether AI was ready to produce virtuous, non-harmful content.


Unlike OpenAI’s ChatGPT, Anthropic’s AI, Claude, is governed by a constitutional AI model whose sole purpose is to regulate the output of another language model according to a set of principles and values. On top of programming the AI to regulate its output ethically, Anthropic prides itself on adding an extra layer that prevents anything from slipping through the cracks.
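
As a rough, hypothetical illustration of that idea (Anthropic’s actual pipeline is considerably more involved), a constitution-style check can be sketched as a second model pass that critiques a draft against written principles and then revises it; the `call_llm` function below is a placeholder, not a real API.

```python
# Toy sketch of a constitution-style check: a second pass critiques the draft
# against written principles and then revises it. This illustrates the general
# idea only; it is not Anthropic's implementation, and `call_llm` is a
# hypothetical placeholder for a real model API.

PRINCIPLES = [
    "Do not produce content that could help someone cause harm.",
    "Do not present speculation as established fact.",
]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError

def constitutional_revise(draft: str) -> str:
    """Critique the draft against the principles, then ask for a rewrite."""
    principle_list = "\n".join(f"- {p}" for p in PRINCIPLES)
    critique = call_llm(
        f"Critique the following answer against these principles:\n{principle_list}\n\nAnswer:\n{draft}"
    )
    return call_llm(
        f"Rewrite the answer so it addresses this critique:\n{critique}\n\nOriginal answer:\n{draft}"
    )
```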


Given their partnership on this project, it’s safe to assume that NotebookLM will have the same or a similar approach in terms of regulation.


Academic Ethics

Even with the rollout of the first widely available LLMs, such as ChatGPT in November 2022, the academic community immediately reacted by trying to place strict regulations on students’ use of AI.


My college banned ChatGPT on the school Wi-Fi within two days of its first release!


The strict regulation is a response to concerns that students would use AI to write their essays and complete their assignments, but as we got familiar with language models like ChatGPT, it became clear that the writing lacked substance, craft, and specificity.


As this new AI model gets closer to release, we are sure that it will revolutionize the intake and processing of credible information, and allow users to consume larger amounts of information in less time, but we are unsure of its capability to generate well-written academic essays.


If AI becomes developed enough to produce complex, well-structured, well-written essays with citations, it would completely flip academia on its head, especially considering it’s been shown that ‘AI detectors’ have significant flaws and cannot reliably ascertain whether something was written by AI or not.


It will certainly be interesting to see how the academic community adjusts its policies to better regulate the use of NotebookLM, but I suspect that, like the current popular AI models, it will not replace human ingenuity. Instead, it will democratize the intake of information, allowing neurodivergent people, as well as those short on time, to better access complex information.


Perspective as a student: academic ethics and AI usage

Through my online presence, I have advocated, and will continue to advocate, for better use of AI as an academic tool, and I cannot stress enough that AI is a tool to enhance your academics, NOT to replace your labor.


However, I still think that AI is an extremely useful tool, and everyone should be able to use it responsibly to increase productivity.


Using AI unethically will not only likely get you a bad grade; it can and will get you permanently dismissed from an academic institution.


I have seen it happen with one of my classmates.


IT IS NOT WORTH IT.


With that being said, I want to quickly discuss some things you need to keep in mind as a student who employs AI ethically.


Firstly, at the beginning of a class, discuss AI usage with your professor and find out their specific rules and guidance on using AI. Professors are usually reasonable human beings, and being transparent with them is always in your best interest.


Secondly, be extremely transparent about your AI usage. If you can, use a writing document that tracks your progress on an assignment with timestamps, and whatever AI language model you use, keep track of the specific tasks and prompts you give it.


If you are smart and proactive, go to your professor before an assignment is due, let them know exactly what you used AI for, assure them that everything you did was within the guidelines, and offer to show specific proof of everything the AI helped you do.

It’s the best way to ensure you are compliant with your institution’s regulations.
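
One low-effort way to keep that record, sketched below purely as a suggestion of my own, is a few lines of Python that append every prompt you send, with a timestamp and the tool you used, to a log file you can later show your professor.

```python
# Small, optional helper (an assumption of mine, not an official tool): append
# each AI prompt you send, with a timestamp and the tool's name, to a log file
# you can later show your professor as proof of how you used AI.

from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_usage_log.txt")  # keep this file alongside your assignment

def log_ai_usage(tool: str, prompt: str, purpose: str) -> None:
    """Record when, with which tool, and why you used AI on this assignment."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(f"{stamp} | {tool} | {purpose}\n{prompt}\n---\n")

# Example:
# log_ai_usage("NotebookLM", "Summarize chapter 3 of the assigned reading", "study prep")
```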


Thirdly, be smart about your usage. The furthest I will ever take AI as a tool is using it as a spell checker or to help me outline an essay. Using it to rephrase entire paragraphs is where you start to tread into dangerous territory.


Lastly: make no mistake, your future is on the line.


If it somehow comes out that you used AI unethically (and it very much can come out), you will likely lose any position and degree you hold, even if you are well into your career.


Final thoughts: what to expect

NotebookLM is currently available only to a select few who are helping to test and refine it, but as it rolls out, we can expect it to be as accessible and free as the rest of the Google suite.

As with other AI models, it will not be perfect; it will continue to improve, and new versions will be released over time.


We have seen what Anthropic’s depth of AI expertise can produce, and we can expect a complete game changer.


It is also important to note that Google is simply not as ethical as many of us would like to believe, and it is not far-fetched to think that NotebookLM may be more profit-driven than we anticipate.


With all that being said, look out for the release of this exciting new platform; as we wait to see how it affects academia, keep in mind how information absorption and retention could be transformed.

