AI in Veterinary and Medical Fields: Grok, DeepSeek, and ChatGPT

Written by artcollecting | Published 2026/03/09
Tech Story Tags: ai | grok | chatgpt | deepseek | medicine | veterinarytechnology | veterinaryai | ai-in-healthcare

TL;DR: AI can be taught to parse instructions from pharmaceutical manufacturers to avoid situations where a doctor prescribes incompatible medications.

"Imagine this: you have just received your test results and want to double-check your doctor's conclusions. You upload these documents into an AI application and receive a response, one in which the AI carefully applies safety protocols. In certain situations, it might even advise you to consult a physician. And the fact that doctors absolutely need to be verified is something I learned firsthand while treating my cat," our CEO says, sharing her personal experience.

Medical Error

There are situations where circumstances compel doctors to stray from the Hippocratic Oath and slide down a slippery slope: using treatment as a pretext to push additional services, or formulating a strategy that is not only ineffective but actively harmful to the patient's life and health.

This is likely what I encountered when I turned to "4lapy.ru" ("4 лапы", Russian for "4 Paws"), a Russian pet care giant with a federal network of pet stores, veterinary pharmacies, and veterinary centers. Everything is conveniently located: they treat animals and sell food and medicine all in one place. The one gap in this federal network's business is the lack of its own online consultation service, so its doctors use a third-party service called Vetsy, where they aggressively push their consultations.

This turns into a never-ending saga, because the first consultation inevitably leads to a second, and the second to a third, always leaving some loose ends.

I paid for online consultations that were conducted immediately after the in-person appointments. During these consultations, medications were prescribed for my cat. The last time, they again prescribed medication, and I bought it. But after unpacking the medicine at home, I realized I needed to ask some clarifying questions. The doctor refused to get in touch; an assistant answered my follow-up questions instead.

Why did these questions arise? I was shocked by the dosages I was supposed to give my cat. Visually, it was more than the volume of food I give him every day. But that is food, and this was medication. The doctor had met his sales target. And I took a risk (because he had treated my cat before and I trusted him): I gave everything that was prescribed.

My cat got sick. As it later turned out, the doctor had prescribed medications that were incompatible with each other. I have now been treating my cat for intoxication for a month and a half. On top of his cystitis, he has started developing nephritis. The consequences of the intoxication have lingered.

What problems could AI have prevented?

I train AI for art authentication and participate in developing algorithms, where I am responsible for designing the logical processes. I believe AI can likewise be trained in veterinary and human medicine to prevent tragedies. Specifically, AI can be taught to parse instructions from pharmaceutical manufacturers to avoid situations where a doctor prescribes incompatible medications.

In the case of the "4lapy.ru" clinic, this involved the antibiotic Amclav combined with Vetom. I was sold Vetom without an instruction leaflet, which violates the law. However, the instructions are published on the manufacturer's website, and they state in black and white that Vetom should not be given together with antibiotics. Yet the doctor prescribed Vetom specifically alongside antibiotics.

AI applications could verify doctors' prescriptions: parse the instructions from drug manufacturers, check medications for compatibility with one another, verify dosages, and deliver a substantive result instead of the standard boilerplate advice to "consult a doctor."
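To make that concrete, here is a minimal sketch of what such a compatibility check could look like. Every name and rule in it is a hypothetical placeholder: a real system would parse the contraindication data from the manufacturers' official leaflets rather than hard-code it.

```python
# Minimal sketch of a prescription compatibility check.
# All data below is illustrative; a real system would parse these
# rules from the manufacturers' official instruction leaflets.

from itertools import combinations

# Hypothetical rules: drug -> drug classes it must not be combined with.
CONTRAINDICATIONS = {
    "vetom": {"antibiotic"},  # leaflet: do not give together with antibiotics
}

# Hypothetical classification of each drug in the prescription.
DRUG_CLASSES = {
    "amclav": {"antibiotic"},
    "vetom": {"probiotic"},
}

def find_conflicts(prescription):
    """Return (drug, other_drug, banned_class) for every incompatible pair."""
    conflicts = []
    for a, b in combinations(prescription, 2):
        for x, y in ((a, b), (b, a)):
            banned = CONTRAINDICATIONS.get(x, set()) & DRUG_CLASSES.get(y, set())
            for cls in sorted(banned):
                conflicts.append((x, y, cls))
    return conflicts

for drug, other, cls in find_conflicts(["amclav", "vetom"]):
    print(f"WARNING: {drug} must not be combined with {other} ({cls})")
```

Even a check this trivial would have flagged the Amclav + Vetom combination from my case before a single dose was given.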

Grok Experience

Grok has less stringent safety protocols than other AI applications, so you can ask it more medical questions. Additionally, xAI, like OpenAI, has announced the possibility of implementing industry-specific solutions. I would argue that veterinary medicine should be among those industries.

But let me return to my own experience. I uploaded test results into Grok. Grok provided answers to the questions I asked and, of course, added that it is better to consult a doctor. I have stopped trusting doctors, so I conduct additional checks. And these checks are exactly what people who are not medically competent need. You could, of course, consult another doctor to verify the first one. But then you face a dilemma: what if they give completely different conclusions? What do you do then—rely on intuition? Seek out a third doctor? One option is to ask an AI.

For a while, Grok would answer well. The problem is that Grok can provide a specific answer to one specific question, but if you ask follow-up questions, it can lose the thread of the discussion. Grok also gets confused if you send it an observation log: even if you add the date and time of each observation within the same thread, it becomes disoriented and will mix up yesterday with today. The more information you upload into a thread, the more confused Grok becomes.

It would be good if developers could eliminate these kinds of errors. The future lies in industry-specific solutions.

Nevertheless, I received quite good recommendations from Grok for treating my cat's intoxication, particularly regarding subcutaneous fluid injections (Lactated Ringer's solution or saline). The veterinarians had not suggested fluid injections, and they considered an IV drip premature, even though his condition was clearly progressing in that direction. After I administered the subcutaneous fluids as recommended by Grok, my cat began to feel better.

DeepSeek and ChatGPT Experience

The responses from DeepSeek and ChatGPT are largely identical and, at times, come across as paranoid. The essence of their answers boils down to this: rush to the doctor immediately, or your cat won't survive another day. Often the problem gets exaggerated: mention that your cat has been sleeping very deeply and for a very long time, and the AI will characterize this as a coma and urgently direct you to a veterinarian. This kind of alarmism renders the AI useless.

At the same time, DeepSeek and ChatGPT are quick to find and categorize information sources. The question remains, however, about the depth of their search capabilities.

Working With Information Sources and Ranking Them

When working on an AI application for art authentication, I devoted a significant amount of time to identifying information sources and ranking them. We developed our own ranking algorithms, which were essential for drawing conclusions. Many popular LLMs lack this capability, and the developers of the most widely used ones will eventually have to address it by offering ways to launch industry-specific solutions.

If you open any AI application as a user, you will notice that the number of sources from which the AI gathers information is often limited, typically ranging from 10 to 20. However, Grok can be persuaded to parse more. In my case, I managed to get Grok to parse around 40 sources at once. When I asked why it parsed so few, Grok replied that parsing more was unnecessary because many materials were already pre-installed.

Overall, my experience with Grok was interesting because it ranks results not only by "relevance," as search engines do, but also by the authority of the sources. The authoritative sources turn out to be trusted medical websites, whereas DeepSeek's answers tend to be based on the most popular articles in search engine results.

However, alongside authoritative sources, Grok also parses posts from social networks, where inaccuracies can appear. These are user-generated posts, not specialized literature.
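As an illustration of what such ranking might look like, here is a small sketch. The authority tiers and weights are invented for this example; no vendor has published its actual scoring, and a production system would maintain a curated, regularly reviewed registry of trusted sources for each domain.

```python
# Minimal sketch of ranking retrieved sources by a blend of
# query relevance and source authority. All tiers and weights
# are illustrative, not any vendor's actual algorithm.

from dataclasses import dataclass

# Hypothetical authority tiers per source type.
AUTHORITY = {
    "veterinary_journal": 1.0,
    "drug_manufacturer": 0.9,
    "trusted_medical_site": 0.8,
    "general_news": 0.4,
    "social_post": 0.2,  # user-generated, prone to inaccuracy
}

@dataclass
class Source:
    url: str
    kind: str         # one of the AUTHORITY keys
    relevance: float  # 0..1, e.g. from an embedding similarity score

def rank(sources, authority_weight=0.5):
    """Order sources by a weighted mix of relevance and authority."""
    def score(s):
        authority = AUTHORITY.get(s.kind, 0.1)
        return (1 - authority_weight) * s.relevance + authority_weight * authority
    return sorted(sources, key=score, reverse=True)

results = rank([
    Source("https://vet-journal.example/amoxicillin-dosing", "veterinary_journal", 0.7),
    Source("https://social.example/post/123", "social_post", 0.9),
])
for s in results:
    print(s.url)  # the journal outranks the more "relevant" social post
```

With weighting like this, a highly "relevant" social post still sits below a moderately relevant veterinary journal, which is exactly the behavior a medical assistant needs.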

Why Users Should Be Given the Opportunity to Consult With AI

When addressing sensitive topics—especially those involving health, well-being, or critical decision-making—we must always account for the human factor. This includes the possibility of medical errors, whether they stem from a lapse in memory, a moment of carelessness, or simply a doctor cutting corners under pressure. It can also involve more deliberate motivations, such as the desire to sell additional products or services, which can subtly influence recommendations. These are not necessarily signs of malice, but they are realities of human behavior that can have serious consequences.

Such mistakes can backfire, sometimes catastrophically. A wrong prescription, an overlooked interaction, or an unnecessary procedure can lead to prolonged suffering, secondary conditions, or irreversible damage. This is not unique to medicine — it applies to any field where human judgment plays a role. But in healthcare, whether for humans or animals, the stakes are particularly high.

There are certain domains where the human factor needs to be minimized or, where possible, eliminated altogether. This is not about replacing human empathy or clinical intuition, but about creating a safety net. AI, if properly trained and implemented, could serve precisely that function. It would allow for the systematic analysis of diagnoses, prescriptions, and treatment plans, cross-referencing them against authoritative data sources, clinical guidelines, and pharmaceutical databases.

Imagine being able to upload test results into an application and receiving an objective, data-driven assessment that flags potential issues — drug incompatibilities, incorrect dosages, or deviations from standard protocols. Such a tool would not replace the doctor but would empower patients and their families to ask better questions, seek second opinions with more confidence, and ultimately make more informed decisions. In a world where trust in medical institutions can be fragile, AI could become a reliable ally in safeguarding health.
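The dosage part of that safety net can be just as mechanical as the compatibility check above. The numbers here are invented placeholders, not real dosing guidance; the point is only that once a labeled mg-per-kg range has been parsed from a leaflet, an overdose is detectable with simple arithmetic.

```python
# Minimal sketch of a weight-based dosage check.
# The range below is an invented placeholder, not real dosing guidance;
# a real system would parse it from the drug's official leaflet.

DOSE_RANGES_MG_PER_KG = {
    "amclav": (10.0, 20.0),  # hypothetical labeled range, mg/kg/day
}

def check_dose(drug, daily_dose_mg, weight_kg):
    low, high = DOSE_RANGES_MG_PER_KG[drug]
    per_kg = daily_dose_mg / weight_kg
    if per_kg > high:
        return f"{drug}: {per_kg:.1f} mg/kg/day exceeds the labeled maximum of {high} mg/kg/day"
    if per_kg < low:
        return f"{drug}: {per_kg:.1f} mg/kg/day is below the labeled minimum of {low} mg/kg/day"
    return f"{drug}: {per_kg:.1f} mg/kg/day is within the labeled range"

# A 4.5 kg cat prescribed 250 mg/day would be flagged immediately.
print(check_dose("amclav", daily_dose_mg=250, weight_kg=4.5))
```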

Editor's note: This story represents the views of the author of the story. The author is not affiliated with HackerNoon staff and wrote this story on their own. The HackerNoon editorial team has only verified the story for grammatical accuracy and does not condone/condemn any of the claims contained herein. If you would like us to include your version of events, please email us at [email protected]. #DYOR


Written by artcollecting | Art Collecting is a global agency specializing in art management.
Published by HackerNoon on 2026/03/09