Stop AI Deception

DIAGNOSIS AND TREATMENT. NEURAL NETWORKS IN MODERN MEDICINE.

07.07.2025
Updated on 08/01/2026

The 2025 Nobel Prize in Physiology or Medicine was awarded to Mary E. Brunkow, Fred Ramsdell, and Shimon Sakaguchi for their discoveries concerning the functioning of the immune system. The scientists studied the mechanisms of peripheral immune tolerance, which prevent the immune system from harming the body. Their discoveries laid the foundation for a new field of research and stimulated the development of new treatments, for example, for cancer and autoimmune diseases.

Scientists refer to neural networks as algorithmic decision-making (ADM) systems. This is the correct term: it is still too early to call such systems artificial intelligence, although leading neural networks do produce an impressive imitation of human intelligence.

The academic publisher WILEY has released a report on a survey of about 5,000 researchers exploring how neural networks are transforming research practices around the world. The report states that although researchers currently focus on specific areas of neural network application, they expect a rapid expansion across all research domains. Notably, most researchers believe that neural networks already surpass human capabilities in more than half of the examined use cases, indicating a high level of confidence in their potential.

For example, the popular medical chatbot DOCTRONIC claims to have already helped more than 12 million people. Its genuine attention to every word of those seeking assistance is truly touching.

In science, neural networks have long been performing the role of specialized assistants, created by scientists themselves to facilitate calculations and verify results.

Digital assistants, specialized expert systems known as diagnostic decision support systems (DDSS), have been helping doctors make diagnoses for many years now. In their article “Specialized AI Expert System vs Generative Large Language Model for Clinical Diagnosis,” Mitchell J. Feldman, Edward P. Hoffer, Jared J. Conley, and others compared two modern DDSS with two non-specialized LLMs. The results of the comparison are shown in the figures.

[Figure: comparison of diagnostic performance of the two DDSS and the two LLMs]

The researchers used 36 unique, previously unpublished clinical cases, ruling out the possibility that they were part of the LLMs' training data. Each system produced a list of possible diagnoses; if the correct diagnosis appeared among the top 25, the case was counted as a success. The study showed that the DDSS given the complete data set included the correct diagnosis in its top 25 more often than the LLMs, especially when laboratory results were available. However, the LLMs also demonstrated high accuracy despite not being specialized in medical topics. This underscores the potential of LLMs in diagnostics, particularly when integrated with medical data.
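As an illustration of the evaluation protocol described above, here is a minimal sketch of a top-k "hit rate" computation; the case data are invented, and the function and variable names are hypothetical, but the k = 25 cutoff mirrors the study's setup.

```python
def top_k_hit_rate(cases, k=25):
    """Fraction of cases whose correct diagnosis appears in the top-k list."""
    hits = sum(
        1 for correct, ranked in cases
        if correct in ranked[:k]
    )
    return hits / len(cases)

# Each case: (correct diagnosis, system's ranked list of candidate diagnoses).
cases = [
    ("sarcoidosis", ["tuberculosis", "sarcoidosis", "lymphoma"]),
    ("lyme disease", ["viral meningitis", "multiple sclerosis"]),
]
print(top_k_hit_rate(cases, k=25))  # 0.5: one of two cases is a top-25 hit
```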

Medical robotics.

Meanwhile, Maja Matarić, PhD, and her colleagues from the University of Southern California, USA, are developing affordable LLM-based social robots called Blossom. These robots communicate with people, providing socio-emotional support through methods of cognitive behavioral therapy.


In developing these soft robots, the university team is trying to create something like a pleasant pet, capable not only of talking with a person but also of helping them remember the good and forget the bad. They deliver the robots to nursing homes, where social robotics can help people with dementia.

A soft robot with ChatGPT for elderly people was developed by the South Korean company HYODOL. The company’s robots synchronize with the home Internet of Things and help elderly people with household organization.

A robotic puppy with similar functions was presented by TOMBOT at the BIOHACKERSWORLD exhibition in Los Angeles. Many other interesting startups were presented at the event, including some with neural network interfaces. Of particular interest is the startup OPTICAREAI, which assesses the condition of the circulatory and nervous systems through the retina of the eyes.

The emotionally supportive robot toy Smart Hanhan was also released by Huawei.

Medical robots that assist in healthcare institutions also continue to develop. The American company DILIGENT ROBOTICS is expanding and generalizing the capabilities of its robot Moxi. These robots help medical staff in US clinics and deliver medications.

Various robotic systems for medicine, as well as delivery robots, are developed by the American company RICHTECH ROBOTICS. The company cooperates with NVIDIA, the developer of processors for neural networks, and offers an extended range of solutions for automating medical services.

SWISSLOG HEALTHCARE, a subsidiary of the robotics developer KUKA, has worked for many years in the medical services automation market. Its developments cover logistics, storage, dosing, medication-intake monitoring, and other services. In cooperation with RELAY ROBOTICS, Swisslog Healthcare also offers a robot for delivering medicines.

LIO, a mobile service robot equipped with a collaborative arm, voice control, and autonomous navigation, was developed by the Swiss company F&P Robotics.

History remembers such tireless medical assistants as the Hospi-R from Panasonic, operating since 2016, and the HoLLiE, equipped with two functional manipulators.

Today, however, a new generation of humanoid robots, such as those from Unitree, is already available for purchase in stores such as ROBOWORKS (USA) or ROBOSAVVY (UK).

The company SharpaWave has introduced the Dexterous Hand, a robotic hand with enhanced motion precision.

Development of medical neural networks.

As of today, the number of neural networks in medical science and practice has increased sharply due to the growth of computing power.

In January 2025, the leading medical journal The Lancet Digital Health published an article presenting the results of a review of medical neural networks over the past 10 years. The authors state that the review received no funding; it summarizes hundreds of publications on neural networks in medicine.

The criteria by which such publications were selected were:

1. human patients;

2. interventions involving algorithmic decision-making systems developed using machine learning (ML);

3. outcomes describing benefits and harms to the patient that directly impact health and quality of life, such as mortality and morbidity.

Studies that were not preregistered, lacked standard-of-care controls, or related to systems assisting in performing actions (for example, in robotics) were excluded.

The search for publications was conducted in the MEDLINE medical literature database, the EMBASE biomedical literature search database, the XPLORE digital library of the professional association IEEE, and the scientific search engine GOOGLE SCHOLAR.

The studies described in the identified publications were checked for compliance of their reporting with the standards of CONSORT-AI or TRIPOD-AI.

Out of the 2,582 publications found, only 18 met the standard of randomized controlled trial (RCT).

However, the authors of the article note that none of the identified studies reported a link between adverse events in patients and the neural-network-assisted intervention.

The authors emphasize that performance indicators in terms of accuracy do not guarantee clinical effectiveness, applicability, or improvement in patient care. Despite positive results in many trials, priority is most often given to diagnostic accuracy rather than meaningful clinical outcomes. This contrasts with the significant growth in approvals of medical devices based on artificial intelligence and machine learning in the USA and Europe since 2015.

The authors also point to the finding by their colleagues that most machine learning-enabled medical devices approved by the U.S. Food and Drug Administration (FDA) received approval without confirmation of effectiveness in randomized controlled trials (RCTs). According to the researchers, these data indicate a critical gap in AI development, where analytical performance is valued higher than patient-relevant outcomes in healthcare.

The above-mentioned study is valuable for its critical view of the field of neural network applications in medicine from the perspective of patient benefit.

It is hard to disagree with these researchers—medicine exists precisely to heal.

A solid analysis of neural networks in medicine, and in dentistry in particular, was presented by Enis Veseli, Mojtab Mehrabanyan, and Nur Ammarot in an article published on August 8, 2025, in the British Dental Journal (BDJ).

Nevertheless, many open questions remain among neural network developers, the organizations implementing them in practice, and regulators (source).

Developers often lack high-quality, diverse, and representative data for training and validating models. They believe that current frameworks, for example those of the FDA (the U.S. regulator), are not always applicable to all AI products and that a more flexible approach to evaluation and monitoring is needed.

Implementers want more clarity about who is responsible if the AI makes a mistake and what the safety and quality requirements are. They also point to a lack of support and education for staff on AI issues, while developers often fail to account for the real working conditions of medical institutions and the needs of users.

This situation calls for first steps toward standardization: a deep study of market demand and supply, along with the introduction of a modern library of metrics and technical standards that can be adopted in new technologies without loss of quality.

Practicing physicians note that with high patient volumes, the workload in general practice is enormous. Even where AI tools take on this processing, AI companies require that clinical responsibility remain with doctors. This creates a new “invisible” cognitive load for general practitioners, who must constantly check neural network output for possible errors.

“AI scribe” neural network technology is currently spreading rapidly among doctors (DRAGON AMBIENT EXPERIENCE (DAX) by Nuance, ABRIDGE, SUKIAI, DEEPSCRIBE, HEIDI HEALTH, and others), with more than a hundred such products already on the market.

Google has released a specialized model MedASR for medical dictation and transcription. Developers can use MedASR as a base model to build efficient voice applications for healthcare.

Industry forecasts suggest that by the end of 2025, up to 30% of healthcare providers will be using some form of AI scribe technology. These scribes transcribe clinical consultations in real time, creating condensed notes that can be entered into electronic medical records.

They can improve efficiency by reducing the time spent writing notes and freeing up doctors' time for patient care. While some studies show that transcription tools may reduce documentation time by just one minute per contact, users report a reduction in perceived workload. Developers of one such tool claim even greater savings: at least 90 minutes per day for general practitioners.

The United Kingdom's National Health Service (NHS) has even banned its staff from using any AI tools for ambient data recording that are not officially registered for medical use in the UK.

In Brazil, the popular cryptocurrency-earning device Worldcoin ORB was banned over its collection of biometric data. The device uses multispectral sensors to identify individuals by the iris of the eye. In the USA and other countries, the issue is still under discussion, while the popularity of the startup Tools for Humanity continues to grow.

To be safe for humans, advanced neural network systems must reliably act in accordance with human values. This is the view of researchers at the Medical University of Vienna: Natalie Maria Kirch, Konstantin Hebenstrait, and Matthias Samwald. They presented the TRIAGE benchmark, built on existing medical triage models of the START series. Triage is the process of sorting patients by the severity of their injuries to save as many lives as possible given limited resources.
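For readers unfamiliar with START, here is a minimal sketch of the kind of rule-based categorization that START-style triage formalizes. The thresholds follow the standard adult START flowchart, but this simplified function is illustrative only (real START rechecks breathing after airway repositioning), and it is not the benchmark itself.

```python
def start_triage(walking, breathing, resp_rate, radial_pulse, obeys_commands):
    """Simplified adult START triage: returns a color category."""
    if walking:
        return "green"    # minor: ambulatory
    if not breathing:
        return "black"    # expectant: no spontaneous breathing
    if resp_rate > 30:
        return "red"      # immediate: respiratory distress
    if not radial_pulse:
        return "red"      # immediate: poor perfusion
    if not obeys_commands:
        return "red"      # immediate: altered mental status
    return "yellow"       # delayed: injured but stable

print(start_triage(walking=False, breathing=True, resp_rate=24,
                   radial_pulse=True, obeys_commands=True))  # yellow
```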

Human cooperation with neural networks is viewed from another perspective by Yanyan Liu, Fan Sheng, and Ruyue Liu of the Shandong Academy of Sciences (China), authors of an article on the attitudes of workers from various industries toward artificial intelligence. The scientists argue that workers treat their orientation toward neural network technologies as a new experience and an opportunity for career advancement. In a small study, they found that adopting new neural network technologies fosters a positive attitude toward work, aligned with the goals of the organization.

Google made a direct contribution to the development of medical neural networks by training one of the Gemma models on medical data. MEDGEMMA by Google is an open-source research model that analyzes medical images (X-rays, CT scans, MRIs, and histological images), summarizes clinical reports, makes diagnoses, and assesses risks.

Google DeepMind, together with Yale University (USA), has developed and presented the C2S-Scale model for studying the behavior of cancer cells. The main problem of cancer immunotherapy, according to scientists, is the so-called “cold” tumors — that is, those invisible to the body’s immune system. C2S-Scale successfully identified a new interferon-dependent enhancer, opening a potential way to transform “cold” tumors into “hot” ones and, consequently, more sensitive to immunotherapy.

A group of scientists from the University of Southern California used a long-standing computer vision method, rare event detection (RED), in combination with liquid blood biopsy to identify anomalies. The neural network, trained on immunofluorescence microscopy images of healthy peripheral blood, detects any abnormality. In particular, it identifies tumor cells in the images even when they are extremely rare in the blood (one in a million).
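The core idea, training only on normal samples and flagging anything that deviates, can be illustrated with a generic outlier detector. This sketch uses scikit-learn's IsolationForest on made-up feature vectors as a stand-in for the study's image-based model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Feature vectors extracted from images of healthy blood cells (synthetic here).
healthy = rng.normal(loc=0.0, scale=1.0, size=(10_000, 16))

# Train on normal data only: the model learns what "healthy" looks like.
detector = IsolationForest(contamination=1e-4, random_state=0).fit(healthy)

# A single aberrant cell shows up as an outlier among thousands of normal ones.
batch = np.vstack([rng.normal(size=(4_999, 16)),
                   rng.normal(loc=6.0, size=(1, 16))])
flags = detector.predict(batch)          # -1 marks an anomaly
print(np.where(flags == -1)[0])          # index of the suspicious cell
```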

In China, the PANDA neural network from Alibaba was trained to detect pancreatic cancer at early stages on standard CT scans.

The Biomedical Data Translator consortium of the National Center for Advancing Translational Sciences announced on July 9, 2025, the first public release of the Biomedical Data Translator — a powerful open-source system based on knowledge graphs designed to integrate and harmonize extensive and complex biomedical datasets to accelerate translational science and patient treatment. The Translator integrates various types of existing data sources, including objective signs and symptoms of diseases, drug effects, and related biological data important for understanding pathophysiology. Researchers can use the NCATS Biomedical Data Translator to query specific biomedical information.

Many researchers lack easy access to experts from other fields of science. This issue was raised by Stanford University computer science PhD Kyle Swanson and his team. Their solution is a virtual laboratory in which a lead LLM researcher directs a team of LLM scientist-agents through a series of research meetings, while a human researcher provides high-level feedback. The virtual laboratory's first study was the development of binding nanobodies for the latest variants of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the virus that causes Coronavirus Disease 2019 (COVID-19).
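A minimal sketch of that orchestration pattern: one "principal investigator" model assigns questions to specialist agents and synthesizes their answers, with a human comment injected between rounds. The call_llm function and the agent roles are hypothetical stand-ins, not the authors' actual implementation.

```python
def call_llm(system_prompt: str, message: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    return f"[{system_prompt[:20]}...] reply to: {message[:40]}"

AGENTS = {
    "immunologist": "You are an immunologist agent.",
    "ml_scientist": "You are a machine learning scientist agent.",
    "chemist": "You are a computational chemist agent.",
}

def research_meeting(topic: str, rounds: int, human_feedback: str = "") -> str:
    notes = [f"Agenda: {topic}", human_feedback]
    for _ in range(rounds):
        # The lead agent turns the running notes into a question for the team.
        question = call_llm("You are the principal investigator.",
                            "\n".join(notes))
        # Each specialist answers; replies are appended to the shared notes.
        for role, prompt in AGENTS.items():
            notes.append(f"{role}: {call_llm(prompt, question)}")
    # The lead agent writes the meeting summary.
    return call_llm("You are the principal investigator. Summarize.",
                    "\n".join(notes))

print(research_meeting("nanobody binders for SARS-CoV-2 variants", rounds=2,
                       human_feedback="Human: prioritize manufacturability."))
```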

The Dutch innovative company Lapsi Health has released an FDA-approved digital stethoscope, KEYKKU, equipped with a neural network. The battery lasts for 72 hours of continuous operation. The microphone captures vocal and background sounds using precise and safe audio recording technology.

Also, on August 6, 2025, NCATS published a technology for early diagnosis of amyotrophic lateral sclerosis (ALS), a paralysis-causing disease, developed within the "Tissue Chips for Drug Screening" program. It is a three-dimensional cell culture model, an organ-on-a-chip, which the scientists call a spinal cord chip. The chip grows micron-scale neurons and endothelial cells (the cells that line blood vessels and regulate blood flow) together in parallel chambers of a miniature device known as a microfluidic chip.

One of the best examples is HEALTHBENCH by OpenAI. HealthBench draws on input from more than 260 physicians from 60 countries to develop medically grounded evaluation criteria and tests AI performance in broad clinical scenarios using more than 5,000 multi-turn doctor-patient dialogues and over 48,000 rubric items. It utilizes datasets on cancer, COVID-19, cardiology, neurology, and other diseases. Its data assemblies are available for training your own models.
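Rubric-based evaluation of the HealthBench kind reduces, at its core, to checking a model reply against weighted criteria and normalizing the score. The sketch below shows that mechanic with invented rubric items; the real benchmark uses an LLM grader rather than the crude keyword matching used here.

```python
from dataclasses import dataclass

@dataclass
class RubricItem:
    criterion: str   # what a good answer must (or must not) contain
    points: float    # positive for desirable, negative for harmful content
    keywords: tuple  # crude proxy for an LLM grader's judgment

def score_response(response: str, rubric: list[RubricItem]) -> float:
    text = response.lower()
    earned = sum(r.points for r in rubric
                 if any(k in text for k in r.keywords))
    max_points = sum(r.points for r in rubric if r.points > 0)
    return max(0.0, earned / max_points)   # normalized to [0, 1]

rubric = [
    RubricItem("advises seeking urgent care", 5.0, ("emergency", "911")),
    RubricItem("asks about symptom duration", 3.0, ("how long", "duration")),
    RubricItem("gives a definitive diagnosis", -4.0, ("you definitely have",)),
]
print(score_response("Chest pain can be an emergency; call 911 now. "
                     "How long has it lasted?", rubric))   # 1.0
```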

In addition, we can use datasets such as MIMIC-III, a large, publicly available database containing de-identified health data from over forty thousand patients who stayed in the intensive care units of the Beth Israel Deaconess Medical Center between 2001 and 2012, as well as MIMIC-CXR and PADCHEST.

CHEXPERT is a large dataset of chest X-ray images and a competition for automated interpretation of these images, which includes uncertainty labels and evaluation sets with reference data annotated by radiologists.

NIH CHESTX-RAY14 DATASET (CXR8): 112,120 chest X-ray images from 30,805 patients, labeled with 14 diagnoses.

COVID-19 IMAGE DATA COLLECTION — a dataset of chest X-rays and CT scans of patients suspected of having COVID-19 or other viral and bacterial pneumonias.

BRAIN TUMOR SEGMENTATION CHALLENGE — datasets of multi-institutional preoperative MRI scans focusing on brain tumor segmentation. And many other useful datasets.
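As a practical starting point for collections like these, the metadata usually ships as a CSV mapping image files to pipe- or comma-separated labels. This sketch builds multi-hot targets from a ChestX-ray14-style table; the column names are assumptions modeled on that dataset, so check the documentation of whichever collection you use.

```python
import numpy as np
import pandas as pd

# Assumed layout: one row per image, labels joined by "|" (ChestX-ray14 style).
df = pd.DataFrame({
    "Image Index": ["00000001_000.png", "00000002_000.png"],
    "Finding Labels": ["Cardiomegaly|Effusion", "No Finding"],
})

classes = sorted({lab for labs in df["Finding Labels"]
                  for lab in labs.split("|") if lab != "No Finding"})

def multi_hot(labels: str) -> np.ndarray:
    """One multi-hot target vector per image."""
    present = set(labels.split("|"))
    return np.array([float(c in present) for c in classes])

targets = np.stack(df["Finding Labels"].map(multi_hot))
print(classes)   # ['Cardiomegaly', 'Effusion']
print(targets)   # [[1. 1.], [0. 0.]]
```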

A large database of classifications of dosage forms, medical supplies, prescription forms, transactions between suppliers, dosage units, and related concepts is being developed by the National Council for Prescription Drug Programs (NCPDP). The NCPDP database integrates the Federal Medication Terminologies (FMT) in the United States, which represent a set of controlled terms and code sets from component vocabularies developed and maintained by the Food and Drug Administration (FDA), the U.S. National Library of Medicine, the Department of Veterans Affairs, the National Cancer Institute, and the Agency for Healthcare Research and Quality. The component terminology of the National Cancer Institute within FMT is represented in the National Cancer Institute Thesaurus (NCI).

An annotated dataset of LDCT lung cancer images, with accompanying documentation for further use in training and testing models, has been made publicly available on Zenodo by researchers from China. Each CT image was manually annotated with pixel-level accuracy along the tumor contours by a researcher with two years of experience in imaging and a radiologist-oncologist with five years of work experience. The entire process is described in their article.

An annotated dataset, DERMA-OCTA (Optical Coherence Tomography Angiography), comprising 330 volumetric scans of 74 patients with various skin diseases, has been made publicly available. The dataset contains the original 2D and 3D images, versions processed by five different preprocessing methods, and reference 2D and 3D segmentations. For each version, segmentation labels are provided, created using the U-Net architecture in both its 2D and 3D variants.
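For orientation, here is a compact 2D U-Net in PyTorch: an encoder-decoder with skip connections, the kind of architecture used to produce such segmentation labels. This is a bare-bones sketch with two resolution levels and hypothetical channel counts, not the dataset authors' exact configuration.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = block(in_ch, 32)
        self.enc2 = block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)        # 64 = 32 upsampled + 32 skip
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)                 # full-resolution features
        s2 = self.enc2(self.pool(s1))     # half-resolution features
        u = self.up(s2)                   # back to full resolution
        d = self.dec1(torch.cat([u, s1], dim=1))  # skip connection
        return self.head(d)               # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 1, 64, 64))
print(logits.shape)   # torch.Size([1, 2, 64, 64])
```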

A framework for training a Random Forest model to control coagulant dosing (water purification) at treatment facilities was presented by scientists from China. At this stage, the authors report difficulties deploying the model in cloud services due to the low speed of data acquisition. Nevertheless, they are working on the architecture of the water data collection system and online correction of coagulant levels with cloud technologies in mind.
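The modeling step itself is standard supervised regression. A minimal scikit-learn sketch, using made-up water-quality features (turbidity, pH, flow rate) and an invented dose relation as stand-ins for the plant's real sensor data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-ins: turbidity (NTU), pH, flow rate (m3/h).
X = np.column_stack([rng.uniform(1, 100, 2_000),
                     rng.uniform(6.0, 8.5, 2_000),
                     rng.uniform(50, 500, 2_000)])
# Assumed relation: dose grows with turbidity, modulated by pH and flow.
y = 0.4 * X[:, 0] + 5 * (7.5 - X[:, 1]) + 0.01 * X[:, 2] + rng.normal(0, 1, 2_000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.3f}")

# Online use: each new sensor reading yields a recommended coagulant dose.
print(model.predict([[35.0, 7.2, 220.0]]))   # illustrative units (mg/L)
```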

Another useful case from China, identifying early genetic diseases by facial phenotype using existing LLMs with added retrieval-augmented generation (RAG), is described in the article "Graph-Augmented Large Language Models for Rare Genetic Disorders Associated with Facial Phenotype." The Chinese scientists built a knowledge graph of facial phenotypes from 509 relevant publications on rare genetic diseases associated with facial phenotype and combined it with two types of graph RAG (Cypher RAG and Vector RAG). All data and code, including benchmark datasets, are publicly available.
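Of the two retrieval styles, vector RAG is the easier to sketch: embed the knowledge base passages, retrieve the nearest ones for a query, and prepend them to the LLM prompt. This sketch uses TF-IDF vectors instead of a neural embedder to stay self-contained, and the passages are invented; the real system retrieves from the publication-derived knowledge graph.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "Hypertelorism and a broad nasal bridge are reported in syndrome A.",
    "Downslanting palpebral fissures and malar hypoplasia suggest syndrome B.",
    "Syndrome C features a long philtrum and thin upper lip.",
]

vectorizer = TfidfVectorizer().fit(passages)
index = vectorizer.transform(passages)           # the "vector store"

def retrieve(query: str, k: int = 2) -> list[str]:
    sims = cosine_similarity(vectorizer.transform([query]), index)[0]
    return [passages[i] for i in sims.argsort()[::-1][:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    # In practice this prompt would be sent to the LLM.
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Patient has hypertelorism and a broad nasal bridge."))
```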

Scientists from the Icahn School of Medicine at Mount Sinai (New York, USA) trained neural networks to estimate the probability of disease development from genetic mutations. Classified data of this kind can be found in ClinVar, the publicly accessible archive of reports on human gene variants, classified by disease and drug response, with supporting evidence.

A constructive approach to accounting for and bringing all existing and future datasets to unified standards will make it possible to create a large and universal repository of humanity’s medical knowledge, on the basis of which we will be able to train a future true artificial general intelligence (AGI).

A great deal of work lies ahead, and we are confident that these efforts will make it possible to achieve results in general medicine — and in gerontology in particular — that are unimaginable today.

For example, Retro Biosciences, a pharmaceutical company working on cell rejuvenation, was funded by OpenAI CEO Sam Altman back in 2023. Its scientists are developing a therapy that stimulates autophagy to clear the body's cells of accumulated protein aggregates and damaged biomolecules.

And DeepMind, Google's UK branch, together with researchers from the European Molecular Biology Laboratory (EMBL), developed a program based on the AlphaFold neural network that predicts protein structure from its amino acid sequence. In 2024, the program's creators were awarded the Nobel Prize for AlphaFold 2's contribution to protein structure prediction.

The leading developer of graphics processors, NVIDIA, is expanding the capabilities of molecular design. The company has introduced a generative model called ReaSyn, designed to predict molecular synthesis pathways while also accounting for the limitations of existing approaches. Scientists emphasize that chemistry faces a challenge in predicting molecular synthesis pathways, where each pathway includes a series of intermediate stages. Predicting the pathway is a critically important step in the development of drugs, chemicals, and materials, since a molecule, no matter how promising it may be, is valuable only if it can be synthesized. ReaSyn uses a unique Chain of Reactions (CoR) notation inspired by the Chain of Thought (CoT) reasoning approach in LLMs, combined with a search algorithm during testing.

The management of molecular structure creation is actively used by leading companies' models in the development of new materials. Billions of organic molecules have been generated computationally; however, functional inorganic materials, including medical compounds, remain rare due to limited data and structural complexity. Scientists from the University of Massachusetts (USA) have presented the SCIGEN framework, which imposes geometric constraints, such as honeycomb and kagome lattices (made of equilateral triangles and hexagons), on diffusion-based generative models to identify candidates for stable quantum materials. This approach made it possible to generate ten million inorganic compounds with Archimedean and Lieb lattices, more than 10% of which pass multi-stage stability screening.

The research laboratory Anthrogen has introduced ODYSSEY, a family of discrete diffusion protein language models scalable from 1.2 billion to 102 billion parameters. Long-range effects in proteins propagate through a three-dimensional geometry constrained by the covalent structure: when residues i and j come close in space, the distance between them must be consistent. The dependencies are therefore many-body and locally cooperative rather than arbitrary pairwise jumps along the sequence. At the input stage, Odyssey treats proteins as more than strings: amino acid sequences are used as usual, while the three-dimensional shape is converted into compact structural tokens by a finite scalar quantizer (FSQ), which divides the coordinates of protein atoms into voxel blocks, rounds them to a fixed set of levels, and turns the blocks into tokens for the model's input.
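Finite scalar quantization itself is simple to illustrate: clamp each coordinate channel, round it to a small fixed set of levels, and read the resulting level indices as a token id. This numpy sketch quantizes raw 3D points with hypothetical level counts; the real model's tokenizer operates on learned features, not raw coordinates.

```python
import numpy as np

LEVELS = np.array([8, 8, 8])   # hypothetical quantization levels per axis

def fsq_tokenize(xyz: np.ndarray, box: float = 10.0) -> np.ndarray:
    """Map 3D coordinates in [-box, box] to discrete token ids."""
    unit = np.clip(xyz, -box, box) / box            # -> [-1, 1]
    idx = np.round((unit + 1) / 2 * (LEVELS - 1))   # integer level per axis
    # Mixed-radix encoding: one token id per point.
    return (idx[:, 0] * LEVELS[1] * LEVELS[2]
            + idx[:, 1] * LEVELS[2] + idx[:, 2]).astype(int)

atoms = np.array([[0.0, 0.0, 0.0],
                  [1.2, -3.4, 7.7],
                  [9.9, 9.9, -9.9]])
print(fsq_tokenize(atoms))   # [292 278 504]
```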

Machine learning applications for studying pathologies in bioimages are available on the market. Among them are the powerful IMAGEJ, known since the beginning of the millennium, the flexible QuPath, and other tools equally useful for researchers, such as CELLPROFILER, APHELION, and the open-source ICY; some of them are catalogued in Awesome-biological-image-analysis.

Those who want to learn the basics of anatomy and physiology can explore interactive 3D models of the human body on websites like ZYGOTEBODY, BIODIGITAL, VISIBLEBODY, LIFESCIENCEDB, or in the mobile app ANATOMY3DATLAS. The website PRIMALPICTURES also offers access to its data through direct contact with the site administration (registration is currently suspended). But be careful: do not self-diagnose or self-treat. Only a doctor can accurately determine the current state of an illness.


The practical side of 3D graphics in medicine has long been successfully mastered. The surgical service Medivis has developed and continues to improve augmented 3D-reality technology for surgeons. The technology digitizes surfaces using probes at four reference points and enhances the video stream with clouds of hundreds of points that form the geometry required for a procedure.

The VisAR surgical navigation system with an immersion effect from Novarad Enterprise Healthcare Solutions is also designed to assist in surgery. It combines the real-time use of CT images, ultrasound, 3D reconstruction, and reference information.

Proprio has created a system based on Volumetric Intelligence technology that generates three-dimensional images of anatomical structures and the surgical field without harmful radiation. The company also offers Paradigm — a surgical navigation program that provides critical information by combining preoperative imaging and planning with postoperative outcome data.

Mediview is implementing its augmented-reality technologies in medical institutions around the world. The company focuses on integrating systems into interventional complexes (OmnifyXR) and minimally invasive needle-insertion systems (XR90) to ensure volumetric and precise visualization.

The developer of innovative technologies for surgery and radiation therapy Brainlab has created a CT replacement — a body scanner with Loop-X technology that allows scanning the human body in a single shot or scanning an individual organ. The technology generates 2D and 3D scans for intraoperative visualization, which are then aligned with the real world.

UpSurgeOn, which applies fluorescence technologies to vascular surgery and oncological neurosurgery, has developed the Neurosurgery app for neuronavigation and fluorescence simulation.

The navigation system Zeta from Zeta Surgical enables the alignment of brain scans with surface scanning of the head to build a 3D point cloud and maintain accurate registration and head-position tracking during neurosurgical procedures. It launches very quickly, providing rapid technical and visual support for emergency brain surgeries.

The global networked medical platform NEO MEDICAL is developing the ADVISE system, which provides intraoperative visualization for spinal surgery using augmented reality powered by neural-network-based technology, helping surgeons expand the possibilities of spinal treatment.

The 3D visualization and animation studio RANDOM42 creates interactive applications that allow simulated objects to interact with the real world in augmented reality.

VIRTAMED develops simulators for training surgeons. Its simulation system combines virtual graphics with physical components (for example, an abdominal cavity simulator) and reproduces real-world scenarios for learners, recording their performance.

The augmented reality library ARCore with a cross-platform API from Google is a set of tools for motion capture, environment tracking, depth understanding, and other functions essential for AR.

The Guidelines International Network (GIN) has proposed a set of principles for the development and use of AI-based tools and processes to support guideline development in healthcare. The organization's working group identified eight principles that should be followed when using artificial intelligence in the context of guidelines: transparency, pre-specification, complementarity, trustworthiness, ethics, accountability, appropriateness, and evaluation.

ISO standards, which define requirements for the quality of processes and the competence of personnel, also improve patient safety, help optimize supply chains, and stimulate innovation in digital healthcare and sustainable medical solutions.

The scientific platform ELSEVIER, together with the technology company RELX, offers the promotion of scientific publications and support for new research containing novel ideas in the field of healthcare.

The scientific platform RESEARCHGATE works in a similar way.

The LAUDE institute offers support and simple grants to researchers and neural network developers. One of the conditions for receiving support is that the neural networks serve a practical purpose.

The development of neural networks aimed at specific practical purposes is actively promoted by the leading South Korean research institute KAIST AI. In the institute's competition for the best work in the category "Applications and Practice," the winner was the neural network LABTOP, which produces continuous numerical forecasts for a wide range of laboratory test results. The network supports clinical decision-making and the early detection of critical conditions.

The startup investment platform Y Combinator lists numerous funded startups in the field of healthcare, many of which also offer job opportunities.

Known for its military developments, the innovative cyber giant PALANTIR automates data exchange between partners in medical supply chains, creating a more proactive decision-making environment with its Foundry service. Palantir Foundry was also tested in a patient care optimization system at Tampa General Hospital, Florida, USA, which improved the performance indicators of the medical institution.

BENCHMARK offers significant automation of reinforcement learning (RL) model training. This manufacturer of "smart" environments develops secure sensor networks based on the HL7 medical data exchange protocol, which stream sensor data in real time to a cloud platform for analysis and neural network training. Maximum connectivity flexibility and functionality are provided by a universal Linux-based gateway with end-to-end data encryption.

Research in biological and artificial intelligence is brought together by the ALGONAUTS project. The platform aims to help experts from both domains model brain activity algorithmically. It offers pre-trained neural network models and datasets for training, and hosts competitions in building algorithmic models that predict brain activity from functional magnetic resonance imaging data.

One interesting medical project is BENCHLING, a cloud service for laboratory information management in biotechnology and pharmaceuticals. The platform provides molecular data management (storage and analysis of DNA/RNA/protein sequences) and includes BLAST algorithms, AlphaFold, CRISPR tools, and much more. It also considers supporting promising projects. Benchling is building the digital data platform that will become the foundation for AI-based research and development at the transnational pharmaceutical giant SANOFI, and it collaborates with many other global leaders in healthcare.

Similar services are also provided by LABGURU, EPIC, SCINOTE, and several others.

But what, at present, are equipment manufacturers offering us—equipment capable of independently analyzing data and making decisions?

In leading medical journals in 2025, articles have been published describing new diagnostic technologies through the implementation of ADM.

1. NEUTRALIZATION OF SNAKE VENOMS.

On January 15, 2025, a group of scientists, in the article “De novo designed proteins neutralize deadly snake venom toxins,” described using the deep-learning-based RFdiffusion neural network to develop antidotes for short-chain and long-chain α-neurotoxins and cytotoxins from the 3FTx family of snake venom toxins.

RFdiffusion is a generative model that designs new three-dimensional protein structures de novo, rather than merely predicting a structure from an amino acid sequence.

Existing antidotes cause side effects in some patients, have low efficacy against toxins with weak immunogenicity, and must be administered only in medical facilities. To overcome these limitations, there is growing interest in new approaches to treating snakebite envenomation, including the use of human recombinant antibodies, repurposed synthetic toxin inhibitors, or combinations of antibodies and inhibitors.

Scientists believe that de novo design approaches may have advantages over traditional methods of antidote development. They explored the development of binders both for individual natural toxins and for consensus toxins representing a family of toxin molecules, since binders to the latter may have broader neutralizing activity.

2. DIAGNOSIS OF BODY TISSUES.

In May 2025, the journal The Lancet Digital Health published an article titled "Estimating the volume of six body tissue types using artificial intelligence on cardiac attenuation correction CT scans to predict mortality: a multicenter study."

This study was funded by the National Heart, Lung, and Blood Institute of the U.S. National Institutes of Health (NIH).

The authors of the article use chest CT images obtained as low-dose scans as a byproduct of myocardial perfusion imaging (MPI—a radionuclide study of the heart muscle’s blood supply to identify areas of ischemia). These CT images are called CT Attenuation Correction (CTAC) scans and are usually applied only to eliminate artifacts interfering with the assessment of perfusion.

But now, on these chest CTAC scans, which are typically used only for technical correction, a trained neural network computes the volume and density of six tissue types: bone, skeletal muscle, subcutaneous adipose tissue (SAT), intramuscular adipose tissue (IMAT), visceral adipose tissue (VAT), and epicardial adipose tissue (EAT).
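Once a segmentation model has labeled each voxel with a tissue class, volume estimation is just counting voxels and multiplying by voxel size. A small numpy sketch of that step; the tissue label codes and voxel spacing are invented for illustration.

```python
import numpy as np

# Hypothetical label codes for the six tissue classes.
TISSUES = {1: "bone", 2: "skeletal muscle", 3: "SAT",
           4: "IMAT", 5: "VAT", 6: "EAT"}

def tissue_volumes_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 3.0)) -> dict:
    """Volume per tissue class, from a voxel-wise segmentation mask."""
    voxel_ml = np.prod(spacing_mm) / 1000.0     # mm^3 -> milliliters
    return {name: float((mask == code).sum()) * voxel_ml
            for code, name in TISSUES.items()}

# A random mask stands in for the network's output on a CTAC scan.
mask = np.random.default_rng(0).integers(0, 7, size=(64, 64, 40))
print(tissue_volumes_ml(mask))
```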

The study considered only the quantitative assessment of different tissues using chest CT, but quantitative assessment of body composition with abdominal CT may potentially provide additional prognostic value. The scientists are convinced that further studies are needed to compare their approach using volumetric chest CT with existing approaches based on single-slice abdominal radiography that have been used in previous studies of body composition.

Chinese scientists used low-dose chest tomography to train a neural network to measure the size of adrenal glands included in the scanned area for the diagnosis of endocrine and other pathologies. This method will reduce patient radiation exposure during adrenal examinations.

3. DIAGNOSIS OF COGNITIVE IMPAIRMENTS (DEMENTIA).

In May 2025, the journal The Lancet Digital Health published an article titled “Development and testing of artificial intelligence models using voice biomarkers to detect cognitive impairment in community-dwelling adults: a cross-sectional study in Japan.”

The authors report using neural network analysis of the human voice to diagnose cognitive impairment. The system extracts voice biomarkers from unstructured speech (3 minutes of free conversation) using Meta's Wav2Vec2 model, without relying on the meaning of the words.

The model was trained on 3-minute speech samples from 1,461 elderly people in Japan (average age 79.5 years), of whom 979 were used for training and 482 for testing. Cognitive status was assessed using the Memory Performance Index (MPI). The model demonstrated an AUC (the area under the ROC curve, which relates the cases the model identifies as positive or negative to the actual cases) of about 0.88–0.89, where 1 means perfect accuracy and 0.5 means random choice.
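For reference, an AUC figure of this kind can be computed for any classifier output with one scikit-learn call; the labels and scores below are invented toy values, not the study's data.

```python
from sklearn.metrics import roc_auc_score

# 1 = cognitively impaired, 0 = healthy (toy labels).
y_true  = [0, 0, 0, 1, 1, 0, 1, 1, 0, 1]
# Model's predicted probability of impairment for each speech sample.
y_score = [0.1, 0.3, 0.2, 0.8, 0.7, 0.6, 0.9, 0.4, 0.2, 0.5]

# 1.0 = perfect separation of the classes, 0.5 = random guessing.
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")   # AUC = 0.92
```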

4. DIAGNOSIS OF BREAST CANCER.

In June 2025, the journal The Lancet Digital Health published an article titled “Prediction of breast density from clinical ultrasound images using deep learning: a retrospective analysis.”

The authors of the article address the problem of predicting the probability of breast cancer based on data about breast density derived from clinical 2D ultrasound images.

The neural network was shown a dataset of 405,120 clinical breast ultrasound images from 14,066 women (mean age 53 years, range 18–99 years), in which each ultrasound image was assigned a reference breast density category on the BI-RADS scale, obtained from mammography (the gold standard) in accordance with the classification of the American College of Radiology (ACR). These mammography-derived categories served as the correct answers during training; the network learned which patterns in ultrasound scans correspond to high or low breast density.

As a result, the model makes it possible to predict breast density in cases where mammography is difficult or impossible, and also helps physicians assess cancer risk and plan examinations and prevention.

Scientists from India and Saudi Arabia have presented DXAIB, a hybrid methodology that integrates convolutional neural networks (CNN) with a Random Forest (RF) model for more accurate detection of breast cancer. The hybrid approach employs the widely used explainable-AI method SHAP, providing healthcare professionals with comprehensive explanations and valuable information.
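The general shape of such a hybrid is a CNN used as a feature extractor feeding a tree model, with SHAP attributing predictions to the extracted features. This sketch shows that handoff with an untrained toy CNN and synthetic data; it is not the authors' DXAIB pipeline.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

# Tiny CNN used purely as a feature extractor (untrained here; a real
# pipeline would train or fine-tune it on the imaging data).
cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),                      # -> 16 features per image
)

rng = np.random.default_rng(0)
images = torch.randn(200, 1, 64, 64)            # synthetic stand-in images
labels = rng.integers(0, 2, 200)                # 1 = malignant (invented)

with torch.no_grad():
    features = cnn(images).numpy()              # CNN features feed the RF

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(features, labels)

# SHAP attributes each RF prediction to the CNN-extracted features.
import shap
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(features[:5])
print(np.shape(shap_values))   # per-class attribution values
```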

5. CHEST X-RAYS.

On June 18, the NEJM AI section of the medical journal The New England Journal of Medicine published an article titled “PadChest-GR: a bilingual chest radiograph dataset for image-based report generation,” in which the authors Nihil R. Sahni and Brandon Carrus announced a new dataset, PadChest-GR. It is a bilingual (Spanish and English) dataset of chest X-rays (CXR) built on the large-scale PadChest dataset, which includes more than 160,000 images from 67,000 patients, with detailed annotations intended for training artificial intelligence models that automatically generate radiology reports with precise localization of pathologies on the images.

Existing datasets are insufficient for building complete grounded reports because they lack spatial annotations linked to comprehensive sets of descriptive finding sentences. PadChest-GR provides a valuable resource for developing and evaluating grounded radiology report generation (GRRG) models on CXR images. (Funded by Microsoft Corporation.)

The researchers address the task of creating a dataset to train models capable not only of describing findings on X-rays in text (radiology reports) but also of showing exactly where these findings are located on the image.

In creating the dataset, 4,555 CXR studies with grounded reports were used, of which 3,099 were abnormal and 1,456 were normal. In total, PadChest-GR contains 7,037 positive-finding sentences and 3,422 negative-finding sentences.

The researchers hope that future efforts will achieve greater diversity in results by including data from multiple institutions, thereby improving generalizability.

Indian researchers have reported notable progress with the CXR-MultiTaskNet training framework, which leverages CNR250K and other datasets to address chest disease classification and localization within a unified approach. The shared feature extraction strategy employs a ResNet50 backbone, optimized dual-task heads, and Grad-CAM-based explainability. This architecture reduces the operational inefficiency caused by single-task pipelines and isolated feature learning identified in previous studies, while delivering the interpretable results required for clinical deployment.

6. DIAGNOSIS OF STOMACH CANCER.

The well-known Chinese company Alibaba funded the creation of the GRAPE neural network (Gastric Cancer Risk Assessment Procedure with Artificial Intelligence), capable of diagnosing stomach cancer in cases where specialists fail. An article about it was published on June 24, 2025, in Nature Medicine, part of the Nature portfolio.

Stomach cancer is the fifth most frequently diagnosed type of cancer and the fourth leading cause of cancer-related death worldwide.

The authors of the article explained that endoscopic screening of the stomach is often ineffective, and therefore medicine needs to expand the screening protocol. For this reason, they developed the Gastric Cancer Risk Assessment Procedure with Artificial Intelligence (GRAPE), using non-contrast CT and deep learning to detect the disease.


GRAPE was trained on 3,470 cases of patients with stomach cancer and 3,250 cases of healthy patients. It generates two types of outputs: a pixel-level segmentation mask of the stomach and tumors, and a classification score distinguishing patients with stomach cancer from patients without the disease. The model follows a two-stage approach. In the first stage, a segmentation network is used to locate the stomach within the entire CT scan, generating a segmentation mask that is then used to crop and isolate the stomach region. This cropped area is passed to the second stage, where a joint classification-segmentation network with two branches is used. The segmentation branch detects tumors within the identified stomach area, while the classification branch integrates multi-level features to classify the patient as positive or negative.
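The two-stage logic, segment the stomach first and then classify only the cropped region, can be shown with a small numpy helper that turns a stage-one mask into the crop passed to stage two. Both networks are stubbed out here; only the plumbing between the stages is real.

```python
import numpy as np

def bbox_crop(volume: np.ndarray, mask: np.ndarray, margin: int = 4):
    """Crop the CT volume to the bounding box of the stage-1 stomach mask."""
    zs, ys, xs = np.where(mask > 0)
    lo = [max(v.min() - margin, 0) for v in (zs, ys, xs)]
    hi = [min(v.max() + margin + 1, s)
          for v, s in zip((zs, ys, xs), volume.shape)]
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

# Stubs standing in for the two trained networks.
def stage1_segment(volume):                      # stomach localization
    mask = np.zeros_like(volume, dtype=np.uint8)
    mask[20:40, 30:60, 30:60] = 1                # pretend detection
    return mask

def stage2_classify(cropped):                    # tumor branch + class branch
    return {"cancer_score": float(cropped.mean() > 0.0)}

ct = np.random.default_rng(1).normal(size=(64, 96, 96))
stomach = bbox_crop(ct, stage1_segment(ct))
print(stomach.shape, stage2_classify(stomach))
```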

In addition, training the convolutional model ResNet50 on the KVASIR gastrointestinal tract image datasets to extract disease features attempts to address the longstanding issue of computational overhead caused by processing irrelevant features. Integrating a new Entropy Field Propagation layer into the pipeline further improved the classification quality metrics.

7. DIAGNOSIS OF PARKINSON’S DISEASE.

On June 26, 2025, the NEJM AI section of the medical journal The New England Journal of Medicine published an article titled “Screening for Parkinson’s Disease Using Artificial Intelligence and Smile Videos.”

The authors describe the system of neural network models they developed, trained on 1,452 participants, including 391 with Parkinson’s disease — 300 of whom were clinically diagnosed, and 91 self-reported their condition. Participants used an online tool to record themselves (either at home or in clinical settings) mimicking three facial expressions (smile, disgust, and surprise). To quantitatively assess hypomimia, facial landmarks and features based on action units were extracted. Neural network models were trained on these features to distinguish individuals with and without Parkinson’s disease. The generalizability of the model was tested on external datasets (from clinics in the US and Bangladesh).

This system of models trained on smile videos achieved a diagnostic accuracy of 87.9% and 89.3% correctness in distinguishing healthy individuals from those with the disease in cross-validation. No significant gender or ethnic bias was found, except for higher accuracy for women in the Bangladeshi dataset.

8. DIAGNOSIS OF EPILEPSY.

On June 26, 2025, the NEJM AI section of the medical journal The New England Journal of Medicine published an article titled “Expert-Level Detection of Epilepsy Markers in Electroencephalography at Short and Long Time Scales.”

The authors report developing SpikeNet2, a deep learning model based on a residual (difference-learning) neural network architecture, and enhancing it with hard negative mining to reduce false positives.
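Hard negative mining is a simple training-loop trick: run the current model over negative data, collect the false positives it is most confident about, and add them back to the training set. A toy scikit-learn version with synthetic "EEG feature" vectors, standing in for the deep model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, (300, 8))     # spike-like feature vectors
X_neg = rng.normal(0.0, 1.0, (3000, 8))    # background EEG activity

X = np.vstack([X_pos, X_neg[:300]])
y = np.array([1] * 300 + [0] * 300)
model = LogisticRegression(max_iter=1000).fit(X, y)

for round_ in range(3):
    # Score the negative pool; the highest-scoring ones are the
    # "hard negatives" the model currently mistakes for spikes.
    scores = model.predict_proba(X_neg)[:, 1]
    hard = X_neg[np.argsort(scores)[-100:]]
    X = np.vstack([X, hard])
    y = np.concatenate([y, np.zeros(100, dtype=int)])
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print(f"round {round_}: mean score on hard negatives "
          f"{model.predict_proba(hard)[:, 1].mean():.3f}")
```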

The study analyzed 17,812 electroencephalography (EEG) recordings from 13,523 patients across multiple institutions, including hospitals. In total, 32,433 event-level samples annotated by experts were used for training and evaluation.

The model was trained to detect spikes — brief abnormal electrical impulses in the brain recorded in EEG graphs — to identify epilepsy foci amid electrical noise and false alarms.

The model demonstrated the following results: AUROC (the ability to distinguish positives from negatives, where 0.5 means no discrimination and 1 means perfect discrimination) of 0.942, and AUPRC (the proportion of correctly found events among all events identified by the model) of 0.948.

9. ELECTROCARDIOGRAM DECODING.

On June 26, 2025, the NEJM AI section of The New England Journal of Medicine published an article titled “A Foundation Electrocardiogram Model Built on More Than 10 Million Records.”

The authors introduce ECGFounder — a general-purpose foundation model for ECG analysis that leverages real cardiologist annotations to enhance diagnostic capabilities. ECGFounder is built on 10,771,552 ECGs from 1,818,247 unique patients with 150 label categories from the Harvard–Emory ECG database, enabling comprehensive cardiovascular diagnostics. The model is designed as an efficient out-of-the-box solution while remaining easily adaptable for a wide range of tasks, providing maximum usability. The researchers extended the model’s application to ECGs with fewer leads, particularly single-lead ECGs, making ECGFounder suitable for mobile and remote monitoring scenarios.

Experimental results show that ECGFounder reaches expert-level performance on internal validation sets, with AUROC scores (0 = no discrimination, 1 = perfect discrimination) exceeding 0.95 for 80 diagnoses. It also demonstrates high classification and generalization performance across various diagnoses on external validation sets. When fine-tuned, ECGFounder outperforms baseline models in demographic analysis, clinical event detection, and cross-modal rhythm diagnosis, exceeding baseline methods by 3–5 points in AUROC.

Additionally, their colleagues are working on automating echocardiogram interpretation using the PanEcho neural network.

To train the network, they used 39 transthoracic echocardiography labels and measurements. The system has shown consistency, achieving high accuracy across 18 diagnostic tasks and performing 21 echocardiographic parameter measurements with low normalized mean absolute error. For example: left ventricular ejection fraction estimated with an error of about 4.2–4.5%, diagnosis of moderate and severe left ventricular systolic dysfunction with 99% accuracy, and detection of severe aortic stenosis with 100% accuracy.

10. DIAGNOSIS OF HYPERTROPHIC CARDIOMYOPATHY.

On July 2, 2025, the website of the major scientific publisher Nature published an article titled “Multimodal artificial intelligence for arrhythmic death prediction in hypertrophic cardiomyopathy.”

Sudden cardiac death from ventricular arrhythmias is a leading cause of mortality worldwide. The authors present an innovative neural network model called MAARS (Multimodal AI for ventricular Arrhythmia Risk Stratification), developed to predict and interpret the risk of sudden cardiac arrest in patients using multimodal medical data. To train the neural network, they used demographic data, medical history, symptoms, stress test results, echocardiography and radiological results, as well as contrast-enhanced cardiac MRI images that show myocardial fibrosis — a key substrate for arrhythmias.

The results are impressive: on internal testing (training data), the model achieved 89% accuracy, and on external testing (new data) — 81%. MAARS outperformed existing medical risk assessment systems for sudden cardiac death (such as ACC/AHA, ESC, HCM Risk-SCD), exceeding their accuracy by 22–35%.

11. MELANOMA DIAGNOSIS.

In an article published on July 3, 2025, titled “Assessment of Tumor-Infiltrating Lymphocytes in Melanoma by Pathologist and Artificial Intelligence,” the authors report on a study addressing whether machine learning can provide better reproducibility and prognostic value in quantifying TILs (tumor-infiltrating lymphocytes) in melanoma compared to traditional manual pathologist readings.

The study involved 98 specialists from 45 institutions who assessed 60 melanoma samples. The algorithm ANNMAR_24 demonstrated high reproducibility for all TIL variables, with intraclass correlation coefficients (ICC) above 0.90.

For comparison, traditional pathologist readings had an ICC of around 0.61 (for TIL percentage) and a Kendall's W of about 0.44 (using the Clark system). ANNMAR_24 thus significantly outperformed traditional methods, and its high reproducibility held regardless of participant qualification, among both pathologists and researchers.

12. DIAGNOSIS OF SCHIZOPHRENIA.

In 2021, most attempts to detect schizophrenia were carried out using various types of MRI. Other detection methods utilizing AI included PET, EEG, and approaches based on predicting psychophysiological abilities, as well as the classification of genes and proteins.

This is discussed by the authors of the article “Schizophrenia: A Review of Artificial Intelligence Methods Applied for Detection and Classification” in the International Journal of Environmental Research and Public Health, who analyzed scientific papers on this topic covering the period from 1999 to 2020.

Today, researchers are improving schizophrenia diagnostics because, as pointed out by the authors (the first group of Indian scientists) of the article “Enhanced Detection of Schizophrenia Using Multichannel EEG and Feature Selection Based on CAOA-RST,” traditional diagnostic methods relied on clinical interviews and subjective judgment; they proved ineffective for early identification and inconsistent.

In the article “Detection of Schizophrenia from EEG Signals Using Image Encoding Method and Deep Wrapper-Based Feature Selection,” published in the same journal, Nature's Scientific Reports, the authors (the second group of Indian scientists) emphasize that approximately 1% of the world's population suffers from this severe mental disorder. Although it affects about 20 million people worldwide, it remains one of the most misunderstood and stigmatized conditions.

The two groups of Indian researchers chose similar strategies: converting EEG signals into images (including via wavelet transforms) and applying transfer learning to convolutional models (CNNs) pretrained on the ImageNet dataset of 14 million images, retraining the final layers on EEG spectrograms. The backbone architectures of the convolutional filters and layers differ between the groups: VGG16 and ResNet50, respectively.
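A minimal torchvision sketch of that transfer-learning recipe: load an ImageNet-pretrained backbone, freeze it, and replace the final layer with a two-class head for spectrogram images. VGG16 is shown; swapping in ResNet50 changes only the constructor and the head attribute. The learning rate and batch are illustrative choices, not the papers' settings.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained VGG16; ResNet50 would follow the same pattern.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

for p in model.parameters():          # freeze the pretrained filters
    p.requires_grad = False

# Replace the last classifier layer: 2 classes (schizophrenia / control).
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative step on a fake batch of EEG spectrograms (3x224x224).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.3f}")
```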

The news resource Interesting Engineering also published an article on July 4, 2025, about the achievement of a team of Taiwanese researchers from Taipei Veterans General Hospital. This group of scientists presented BrainProbe—a platform based on artificial intelligence developed for objective diagnosis of schizophrenia using MRI data and deep learning.

The BrainProbe system is capable of detecting structural and functional brain changes characteristic of schizophrenia with an accuracy of up to 91.7%. Using brain scan data from more than 1,500 individuals collected over more than a decade, including both healthy people and patients with schizophrenia, the neural network was trained to detect barely noticeable early-stage changes invisible to the human eye.

13. THE "HUMAN PHENOTYPE" PROJECT AND GLUCOSE LEVEL MONITORING.

The PHENO.AI (Human Phenotype) project is a comprehensive longitudinal cohort study and biobank. The goal of the project is to identify new molecular signatures with diagnostic, prognostic, and therapeutic value, as well as to develop neural network–based predictive models for determining the onset and progression of diseases.

The Pheno.AI project is intended to continue the legendary American-British "Human Genome" project, implemented in cooperation with Celera Corporation (now QUESTDIAGNOSTICS) for sequencing the human genome, which concluded in 2006 with the finding that the final length of the human genome is 2.86 Gb.

Google is developing its unified DNA sequence model AlphaGenome, which takes 1 Mb of DNA sequence as input and predicts thousands of functional genomic tracks with a resolution of up to a single base pair across different modalities.

However, humanity continues to conduct research and collect data in the biotechnology field of medicine, advancing systems biology in various directions. Pheno.AI invites collaboration and encourages participants to upload their datasets to the repository for global researcher access. Data should be uploaded in annotated form, as tables and dictionaries with indexing. For uploading, the project provides the Python library pheno_utils, and the data can be processed conveniently in a Jupyter Notebook.

To achieve its goal of deep phenotyping of over 100,000 participants worldwide, the "Human Phenotype" project collaborates with leading international research institutes and remains open to additional partners wishing to join.

A team of researchers led by Drs. Li Reichher and Smadar Shilo from the Segal Lab at the Weizmann Institute of Science, in collaboration with Pheno.AI and the Mohamed bin Zayed University of Artificial Intelligence in the United Arab Emirates, published a paper in Nature Medicine on July 15, 2025, titled "Deep phenotyping of the health–disease continuum in the Human Phenotype Project." The authors described the creation of a multimodal foundational neural network model trained using self-supervised learning on dietary and continuous glucose monitoring data, which outperforms existing methods for predicting disease onset. This model can be extended to integrate other modalities and act as a personalized digital twin.

14. DIAGNOSIS OF SYSTEMIC LUPUS ERYTHEMATOSUS.

Systemic lupus erythematosus (SLE) is a serious autoimmune disease primarily affecting women. Screening for SLE and its related complications, however, poses a significant global challenge due to complex diagnostic criteria and low public awareness. A group of researchers from China, Singapore, the UK, and the USA published a paper in Cell Reports Medicine on July 15, 2025, titled "Deep learning system for detection of systemic lupus erythematosus from retinal images," presenting DeepSLE, a deep learning system for detecting SLE and its retinal and kidney complications from retinal images.

The authors state that early detection of SLE and timely therapeutic intervention are crucial for increasing the likelihood of remission and improving patient prognosis. Nevertheless, SLE screening remains a global public health challenge due to the lack of widely accepted, standardized, non-invasive, and cost-effective screening tools for early disease detection, especially among asymptomatic individuals or those with mild symptoms. The neural network model was trained and validated on fundus images from diverse multiethnic datasets comprising over 254,246 images from 91,598 participants in China and the UK.

15. PREDICTION OF DIABETIC RETINOPATHY (DR) PROGRESSION.

Diabetic retinopathy is a leading cause of blindness worldwide. The prognostic platform DRForecastGAN, a generative adversarial network for forecasting diabetic retinopathy progression that consists of a generator, a discriminator, and a registration network, was trained, validated, and tested by Beijing scientists on training (12,852 images), internal (2,734 images), and external (8,523 images) datasets. In the evaluation, DRForecastGAN was compared with the publicly available models CycleGAN and Pix2Pix, trained on the same data with the same popular Adam weight optimizer, a weight-update step of 0.0002, and additional adjustment at each step. The DRForecastGAN metrics significantly outperformed models without the additional modules.

Visualizing DR progression can help doctors explain likely disease trends and the necessary treatment and monitoring. Furthermore, retinal-imaging recommendations for DR patients have been based on the general population, which may not always be economically or socially optimal for an individual. Accurate prediction can be used to optimize the frequency of follow-up retinal examinations, improving the efficiency of medical resource utilization. Communication based on predicted future fundus images can also help patients better understand their health condition and improve their cooperation with treatment.
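The optimizer settings quoted above are the classic GAN training recipe. In PyTorch they look like the sketch below; the generator and discriminator are stubbed with tiny networks, lr = 2e-4 matches the paper's weight-update step, and the betas are the common DCGAN-style choice, an assumption here rather than the paper's stated value.

```python
import torch
import torch.nn as nn

# Tiny stand-ins for the platform's generator and discriminator networks.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784))
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

# Adam with a 0.0002 step, per the article; betas are a common GAN default.
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

bce = nn.BCEWithLogitsLoss()
real = torch.randn(16, 784)                      # stand-in fundus features
z = torch.randn(16, 100)

# One discriminator step: real images vs. generated ones.
opt_d.zero_grad()
loss_d = (bce(D(real), torch.ones(16, 1))
          + bce(D(G(z).detach()), torch.zeros(16, 1)))
loss_d.backward()
opt_d.step()

# One generator step: try to fool the discriminator.
opt_g.zero_grad()
loss_g = bce(D(G(z)), torch.ones(16, 1))
loss_g.backward()
opt_g.step()
print(f"D loss {loss_d.item():.3f}, G loss {loss_g.item():.3f}")
```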

16. DIAGNOSIS OF THE CAUSES OF TREMOR.

Parkinson's disease (PD) with tremor dominance and essential tremor (ET) are the two most common types of tremor, and distinguishing them is a serious diagnostic challenge. The Chinese scientist Moxuan Zhang and colleagues selected patients with tremor-dominant PD as representatives of parkinsonian tremor and used structural and functional MRI to analyze morphological changes in patients with PD compared with patients with ET. The final cohort included 69 patients with PD and 71 with ET. Machine learning was used to distinguish the tremor types based on measurements of cortical thickness; for model training, features that repeated more than 500 times were selected. The study used five machine learning methods: random forests and support vector machines with linear, polynomial, radial-basis-function, and sigmoid kernels. The goal was to develop an AI model that effectively distinguishes tremor types in individual patients, using brain morphology alone or in combination with clinical information. In the test cohort, models trained on brain morphology performed similarly to models trained on clinical variables, and performance improved when the two were combined rather than used separately. Predictive performance improved further when feature filtering reduced the number of variables, a result also confirmed on an external test set. Integrating brain morphology and clinical variables into machine learning models thus significantly increases the ability to distinguish tremor types at the individual level, potentially offering a more advanced diagnostic tool for clinical application.
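
A minimal sketch of such a comparison — random forest versus the four SVM kernels on combined morphological and clinical features — might look as follows. The data and feature counts are synthetic stand-ins, not the study's cohort.

```python
# Minimal sketch of the study's comparison: random forest vs. four SVM
# kernels on cortical-thickness features combined with clinical variables.
# All data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_morph = rng.normal(size=(140, 68))      # cortical thickness per brain region
X_clin = rng.normal(size=(140, 5))        # e.g. age, clinical tremor scores
y = rng.integers(0, 2, size=140)          # 0 = ET, 1 = tremor-dominant PD

X_combined = np.hstack([X_morph, X_clin])  # morphology + clinical variables

models = {
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
    **{f"svm_{k}": make_pipeline(StandardScaler(), SVC(kernel=k))
       for k in ("linear", "poly", "rbf", "sigmoid")},
}
for name, model in models.items():
    score = cross_val_score(model, X_combined, y, cv=5).mean()
    print(f"{name}: {score:.2f}")
```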

Neural networks have also demonstrated significant success in the following areas.

– Prediction of early biochemical recurrence of prostate cancer after radical prostatectomy with XGBoost. Data from 1,024 patients — 476 with recurrence and 548 without — were analyzed using 25 clinical and pathological indicators. The model achieved 84% accuracy on the main dataset and 89% on a separate group of 96 patients (a minimal XGBoost sketch appears after this list).

– Data collected with the mobile sensor wGT3X-BT (ActiGraph, USA) — glucose levels, accelerometer readings, and surveys (55,000 time windows of 45 minutes each) — made it possible to train several neural networks to distinguish human states during intermittent fasting with an accuracy of 84–88%. Features were computed from the glucose data using the Cgmquantify Python package.

– A neural network was used to analyze the action of the active monomer Bufalin on ERα (a nuclear receptor protein that regulates cell growth and survival). It was found that Bufalin acts on ERα as a molecular glue, enhancing the interaction between ERα and the ubiquitin E3 ligase STUB1, which leads to proteasomal degradation of ERα. Studies confirm that Bufalin inhibits tumor growth in lung cancer, liver cancer, colorectal cancer, and glioma. However, the molecular mechanisms underlying Bufalin's antitumor activity still need to be clarified, and its precise antitumor targets identified, to optimize its therapeutic application.

– MIMETRIK offers a scanner for dental impressions and prostheses that uses a neural network to create a 3D model and send it to the dental technician.

– Based on the CXR-RANet architecture, a Chinese team developed a deep learning network for analyzing chest X-ray images to detect lung nodules and early-stage lung cancer. It was trained on 2,965 images from 1,762 patients and demonstrated 93% accuracy in distinguishing patients from healthy individuals, surpassing most existing algorithms in feature extraction and classification.

– Trained on CRISPR-associated (Cas) proteins extracted from the NCBI database, classified by gene with redundant sequences removed, the large language model ESM AIL-Scan made it possible to move beyond the old alignment-based method of studying unknown Cas proteins in metagenomes. Clustered regularly interspaced short palindromic repeats (CRISPR) and Cas proteins constitute an adaptive immune system in prokaryotes that protects against invasive genetic elements. During training, Cas proteins were embedded and classified with multiple labels, and a model for predicting trans-cleavage activity was built on ESM and small-scale experimental trans-cleavage data. The trained model was applied to detect Cas proteins and predict their characteristics from sequences extracted from metagenomes. Protein structures were visualized with UCSF Chimera, a system for exploratory structural research and analysis.

– The H-IoT architecture, using convolutional neural networks (CNN) for spatial representation, LSTMs for modeling temporal sequences, and VAEs for detecting hidden anomalies, demonstrated high accuracy across various physiological signals. This method, developed by Indian and American scientists together with colleagues from the UAE, opens prospects for real-time deployment in healthcare monitoring systems, wearable devices, and assistive technologies. Unlike traditional systems, which perform each task separately, this architecture integrates all layers into a single pipeline deployed at the edge.

– Control of CRISPR gene modification is strengthened by a neural network developed by a team led by Soeren Lienkamp, a professor at the University of Zurich and the Swiss Federal Institute of Technology in Zurich; it significantly increases the accuracy of genome editing. The tool, "Pythia", uses a neural network to predict how cells repair their DNA after it is cut by gene-editing tools such as CRISPR/Cas9.

– A deep learning model from the YOLOv5m vision family combined with EfficientNet-B5 demonstrated high performance (accuracy up to 95%) in screening the OV Rapid Test Kit dataset of images obtained via the OV-RDT mobile application. The model processes tests for cholangiocarcinoma, a common malignancy in the Mekong River basin countries closely associated with chronic Opisthorchis viverrini infection, and is effective at early stages of the disease, when ultrasound results cannot always be correctly classified by inexperienced medical personnel. Consumption of raw or undercooked freshwater carp is the main route of infection in these regions. The northeastern region of Thailand has a particularly high prevalence of opisthorchiasis: in some provinces, infection rates range from 20% to 70%, making it one of the most affected regions in the world.

– The multi-agent language model CARE-AD (Collaborative Analysis and Risk Evaluation for Alzheimer's Disease), which predicts the onset of Alzheimer's disease from electronic health record entries, was presented by scientists from Massachusetts (USA) in an article in the journal Nature. Alzheimer's disease is a progressive neurodegenerative disorder characterized by declining cognitive abilities, memory impairment, and functional disturbances that ultimately lead to the loss of independence. In clinical practice, specialists in neurology, psychiatry, geriatrics, primary care, and related fields all contribute to a comprehensive risk assessment for the patient, and the authors propose modeling this clinical procedure with a multi-agent structure in which each agent represents a specialty. CARE-AD simulates a virtual interdisciplinary consultation: agents representing clinical areas such as primary care, neurology, psychiatry, geriatrics, and psychology analyze the dynamics of the patient's symptoms and provide assessments specific to their field, which are then synthesized by an Alzheimer's-disease specialist agent into an individual risk prediction. By modeling temporal patterns of symptoms and incorporating diverse clinical perspectives, CARE-AD seeks to increase sensitivity to early signs of Alzheimer's disease, especially those often underrepresented in structured records, while improving interpretability through agent-specific outputs that physicians can review.

– The staff of the emergency medicine department of the Mayo Clinic in Florida (USA) created a system for accurately predicting bacteriuria using only data readily available during a patient's visit to the emergency department. Urinary tract infections are among the most common bacterial infections, yet they are often misdiagnosed and improperly treated. In an article describing the method, the scientists explain that they compared a logistic regression classifier, the k-nearest neighbors method, a random forest classifier, extreme gradient boosting (XGBoost, in which each subsequent model is trained on the residuals, i.e., the errors, of the previous models), and a deep neural network on how well they predicted three urine culture outcomes. XGBoost demonstrated the highest area under the receiver operating characteristic curve (AUROC) for all evaluated outcomes. These studies indicate that machine learning algorithms may become valuable clinical tools, helping to predict culture results and guide decisions about initiating empirical antibacterial therapy.

– A group of scientists from India and the UAE developed CHASHNIt (combined hybrid architecture for scalable high-performance neural iteration), an integration of EfficientNetB7, DenseNet201, and InceptionResNetV2 that surpasses existing models on all metrics. CHASHNIt is an advanced automated system for skin disease classification that balances scalability, accuracy, and explainability; its main drawback is computational complexity, which future work aims to optimize for low-resource devices. The study set out to overcome limited sampling and the absence of rare but clinically significant skin conditions from existing classifications by using a hybrid architecture and building a reliable image classification system for accurately differentiating 23 classes of skin diseases.
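
Several of the items above (the recurrence and bacteriuria studies) rely on XGBoost over tabular clinical features. Here is a minimal sketch of that setup with synthetic data standing in for the studies' datasets; the feature count mirrors the 25 indicators mentioned above, but everything else is an assumption.

```python
# Minimal sketch of the XGBoost-style tabular setup used in several items
# above. Feature values and labels are synthetic placeholders.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 25))    # 25 clinical/pathological indicators
y = rng.integers(0, 2, size=1024)  # 1 = recurrence, 0 = no recurrence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
proba = model.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, pred))
print("AUROC:", roc_auc_score(y_te, proba))  # the metric the Mayo study reports
```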

The use of recurrent (cyclic), convolutional (filtering), and attention-based architectures (including transformers) shows promise in predicting epitopes for vaccine development. An epitope is a specific region of an antigen recognized by the immune system. B-cell epitopes are protein regions bound by antibodies, whereas T-cell epitopes are short peptides presented on MHC molecules and recognized by T-cell receptors.

The research lab Chai Discovery has released the neural network Chai-2, which can accelerate the design of full-size monoclonal antibodies with therapeutic properties and high performance. Structural accuracy underpins Chai-2's ability to design antibodies against a specified epitope, providing the required function.

Lie detection is performed by a combination of neural networks under the LieXBerta framework (Lie Detection + XGBoost + RoBERTa). Courtroom data were manually annotated into 10 categories and used to fine-tune an LLM (RoBERTa) to classify emotions. The fine-tuned RoBERTa outputs emotions (fear, anger, joy, neutrality, etc.) in vector form; these vectors are combined with features of facial expressions and actions (gestures, behavior) and fed into an XGBoost model (a decision-tree-based boosting method for error correction). Incorporating the human-emotion data significantly improved XGBoost's lie-detection accuracy. A study by Japanese researchers on the structure of human emotions and their recognition using LLMs can be read here.
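
The first stage — turning an utterance into an emotion vector with a RoBERTa-family model — can be sketched with the Hugging Face transformers library. The checkpoint named below is a publicly available stand-in of our choosing; the paper's fine-tuned weights are not something we can reproduce here.

```python
# Sketch of the first LieXBerta stage: an utterance -> emotion score vector.
# The checkpoint is an assumed public stand-in, not the paper's model.
from transformers import pipeline

emotion = pipeline("text-classification",
                   model="j-hartmann/emotion-english-distilroberta-base",
                   top_k=None)  # return a score for every emotion class

scores = emotion(["I have never seen that document before."])[0]
# Turn the list of {label, score} dicts into a fixed-order vector that can be
# concatenated with facial-expression and gesture features for XGBoost.
vector = [s["score"] for s in sorted(scores, key=lambda s: s["label"])]
print(vector)
```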

The Cascade R-CNN neural network (a convolutional neural network, CNN), together with the graph neural network GINet and knowledge graphs (KG), is used by Chinese researchers for diagnosing papillary thyroid carcinoma (PTC). The process consists of: 1. extracting and classifying cellular features; 2. classifying cytological features and modeling dependencies between cells; 3. extracting visual features and constructing a visual graph. The overall classification accuracy is 88.84%.
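
To illustrate the graph step, here is a toy cell-graph classifier in PyTorch Geometric using GIN-style convolutions. It shows the general technique only; the published GINet pipeline, its features, and its graph construction all differ.

```python
# Toy sketch of the graph stage: cells as nodes carrying visual features,
# edges between neighboring cells, and a small GIN classifying the whole
# cell graph. Illustrative only; not the published GINet pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GINConv, global_mean_pool

class CellGraphNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, classes=2):
        super().__init__()
        mlp = lambda a, b: nn.Sequential(nn.Linear(a, b), nn.ReLU(), nn.Linear(b, b))
        self.conv1 = GINConv(mlp(in_dim, hidden))
        self.conv2 = GINConv(mlp(hidden, hidden))
        self.head = nn.Linear(hidden, classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        return self.head(global_mean_pool(x, batch))  # one prediction per graph

x = torch.randn(50, 32)                      # 50 cells, 32 visual features each
edge_index = torch.randint(0, 50, (2, 200))  # random cell-adjacency edges
batch = torch.zeros(50, dtype=torch.long)    # all 50 cells belong to one slide
print(CellGraphNet()(x, edge_index, batch))  # logits: benign vs. PTC
```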

The neural network Evo 2, pre-trained on large corpora of DNA sequences including over two million bacteriophage genomes, has a basic ability to generate new phage-like sequences. Scientists at Stanford University (USA) included special taxonomic sequence labels alongside the genomes during pre-training to steer the model toward generating phage-like sequences from three prompts corresponding to major viral realms: Duplodnaviria (double-stranded DNA viruses), Monodnaviria (single-stranded DNA viruses), and Riboviria (RNA viruses). The virus classification tool geNomad classified 34–38% of the sequences generated by the model as viral.

Japanese developers from SakanaAI have created the ASAL framework ("Automated Search for Artificial Life"), based on a foundation model, for detecting dynamic self-organizing patterns resembling real cells. The ASAL system can identify the emergence of life in places where humans might not notice it. At the same time, unlike biology, this system is aimed at studying the general properties of all life rather than only its specific manifestations.

Daniel Reker, Associate Professor of Biomedical Engineering at Duke University (North Carolina, USA), together with his team, developed a platform that combines automated laboratory methods with a neural network to create nanoparticles for drug delivery. The platform helps determine the optimal ratio of mixture components to form a stable therapeutic molecule capable of delivering a drug into a cell. The researchers claim the technology can enable the delivery of a hard-to-encapsulate leukemia therapy by optimizing the design of a second anticancer nanoparticle.

The problem of controlling the construction of an organism is also being addressed by a group at the Gene Center of Ludwig Maximilian University (LMU) and the Max Planck Institute of Biochemistry (MPI), with the support of the DFG Emmy Noether program, working at the intersection of de novo protein design, deep learning, and the fundamental biophysics of protein function. Meanwhile, Swedish scientists at the University of Gothenburg are working on microscopic mechanisms that use optical metasurfaces for their local control. Such machines can be manufactured with standard lithography methods and seamlessly integrated into a crystal, reaching sizes of tens of micrometers and providing movement accuracy at the submicrometer scale.

Based on the Skinned Multi-Person Linear Model (SMPL) technology for creating a 3D human model using a neural network trained on images, researchers developed a system for reproducing fetal shape and pose in prenatal diagnostics.

The journal Science Robotics also regularly publishes advances in robotic tools controlled by neural networks. In the field of surgery, impressive results have been achieved in the implementation of Foundation neural models capable of recognizing large volumes of images (capturing and processing high-quality video sequences in real time) and performing the function of unprecedentedly precise navigation for autonomously carrying out surgical operations and procedures such as needle insertion.

Autonomous surgeries are already producing good results. Back in 2022, researcher Axel Krieger and his team began training models to recognize anatomy using RGB-D ("red-green-blue-depth") images. In July 2025, their trained model, controlling a robot, performed an elaborate 17-step gallbladder-removal procedure on a pig eight times, several times autonomously correcting its actions and responding to voice commands, demonstrating adaptability even in unforeseen situations.

With the development of high-speed internet, it will become possible to control such operations remotely. Chinese scientists, together with American colleagues, have created signal sources for full-channel wireless communication based on broadband tunable optoelectronic oscillators covering a record-wide frequency range from 0.5 GHz to 115 GHz with high frequency stability and robust coherence (resistance to interference). The sixth generation (6G) of wireless communication will further increase data transmission speed and reduce latency, enabling resource-intensive services such as extended reality (XR) and remote surgery. Together with new optical fiber that transmits light through air inside hollow glass tubes rather than through solid glass, data transmission may accelerate significantly.

Scientists at Columbia University (USA) have developed microcombs for separating or multiplexing frequencies in telecommunications. The "teeth" of these devices resemble a comb and generate sets of optical frequencies that serve as stable carrier streams of information in optical fiber, each within a 200 kHz range. This technology significantly increases the amount of data transmitted through fiber-optic networks.

Scientists from the University of Montreal have trained a neural network to diagnose attention deficit hyperactivity disorder (ADHD). The researchers found that people with ADHD and healthy individuals show differences in brain oscillations (oscillatory processes of neural activity), especially when perceiving visual stimuli. Participants were shown 24 frames of words (each lasting 200 milliseconds) overlaid with visual white noise whose signal-to-noise ratio changed randomly every millisecond according to a specific sinusoidal function. For each participant, a perception map of these words was constructed by comparing the temporal profiles of the noise-image combinations during correct and incorrect responses. Trained on the spectral features of these maps from patients and healthy subjects, the neural network distinguishes between them with high accuracy — over 90%.
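
The final step — classifying participants from the spectral features of their perception maps — can be imitated with standard tools: compute a power spectrum per participant and train a classifier on it. The data below is synthetic, and the logistic regression is a stand-in for whatever model the authors actually used.

```python
# Sketch of the classification step: spectral features per participant,
# then a classifier separating ADHD from control profiles. Synthetic data.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
fs = 1000                              # the noise changed every millisecond
signals = rng.normal(size=(60, 2000))  # 60 participants, 2-second profiles
labels = rng.integers(0, 2, size=60)   # 1 = ADHD, 0 = control

# Power spectral density per participant -> fixed-length feature vector.
freqs, psd = welch(signals, fs=fs, nperseg=512, axis=-1)
features = np.log(psd + 1e-12)

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, features, labels, cv=5).mean())
```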

Scientists from the Department of Obstetrics and Gynecology at the Columbia University Medical Center (New York State, USA) treated male infertility using the STAR neural network search system. The system integrates three main components: a high-speed imaging system, a specially designed Fusion DTx microfluidic chip, and a deep-learning object-detection model. Together, these components provide continuous real-time analysis of sperm samples at a rate of 400 μL per hour, with image capture and processing at 1.1 million images per hour.

In addition to these achievements, there are other important developments.

Brain-computer interfaces.

A device that penetrates deep into the brain, allowing a person without limbs to play video games, is one of several brain-computer interfaces currently being tested on humans in China, Nature reports. The BCI system, developed by the Shanghai medical technology company StairMed, resembles the implants being tested on humans by Neuralink, Elon Musk's company based in Fremont, California. The StairMed device has fewer probes than Neuralink's, but it is also less invasive.

According to Bloomberg and the Focused Ultrasound Foundation, at least 20 companies are currently working on ultrasound technologies for brain scanning and modulation. One non-invasive ultrasound approach using neural networks is Sunmai's invention, Transcranial Focused Ultrasound (tFUS): a helmet that applies low-intensity ultrasound to specific brain regions for diagnostic imaging and modulation. Some researchers consider FUS an alternative to surgical intervention. Data obtained in earlier experiments are used for training and testing the models. For example, in a publication dated September 23, 2023, in the Journal of Neurosurgery, a group of researchers led by neurosurgeon Hao Tan and neurologist Angelica K. Polk describe collecting data from the cerebral cortex of surgical patients using electrocorticographic grids — electrode arrays that record brain signals during craniotomy (opening of the skull). In this way, data were obtained from 82 patients.

Promising claims in the field of non-invasive brain activity scanning are made by the startup Alterego. The developers claim that the Silent Sense sensors will allow Alterego to understand what a person wants to say without uttering a single word. Let’s hope it really works that way.

In the field of brain-computer interface technologies, the global manufacturer of wearable devices, Samsung Electronics, is also making progress. The company’s wearable devices will soon make it possible to detect and monitor left ventricular systolic dysfunction at an early stage — a serious cardiovascular disease that causes about 50% of all cases of heart failure. Also, together with the developer of neural network-based medical platforms MEDICALAI, Samsung Electronics has developed ear devices Ear-EEG — a relatively new form of electroencephalogram that uses electrodes placed around the ears or inside the ear canal.

A group of microbiologists and electronics technicians from the University of Massachusetts (USA) has presented a device capable of imitating the functions of brain neurons. The device corresponds to biological neurons in key parameters such as signal amplitude, pulse energy, temporal characteristics, and frequency response. The artificial neuron can connect to a biological cell to process cellular signals in real time and interpret its states. These results expand the possibilities for creating bio-emulated electronics to improve the bioelectronic interface and neuromorphic integration.

Scientists at the University of Sydney have discovered that the brain’s analgesic (pain-relieving) responses have spatial localization. This means that our brain is capable of mediating selective pain control in specific areas of the body.

The Sunmai startup claims to compete with well-known minimally invasive projects such as SYNCHRON, PRECISION, and PARADROMICS, as well as the announced OpenAI-related project Merge Labs.

The Merge Labs project itself is based on a non-invasive method of interaction with the brain. The head of OpenAI, Sam Altman, and the founder of the anti-fake startup Worldcoin, German physicist Alex Blania, who are working on the project, claim that low invasiveness combined with high efficiency is the main goal of the Merge Labs project. The project is headed by biomolecular engineer Mikhail Shapiro.

For more than ten years, researchers have been able to accurately predict what a person sees or hears by using brain activity recorded with functional magnetic resonance imaging. A non-invasive technology called “mental subtitles,” which makes it possible to convert thoughts into sentences using this technology and a neural network, is described in an article in Science Advances.

Image description

The project PIRAMIDAL, supported by the Y Combinator startup platform, is also moving in the direction of brain research. In collaboration with the Cleveland Clinic, the project trains a neural network on EEG data collected from tens of thousands of patients. The neural network is being trained to detect anomalies in real time.

Francis Willett, Associate Professor of Neurosurgery at Stanford University, and colleagues use brain-computer interfaces to help people whose paralysis has deprived them of the ability to speak. The scientists use tiny arrays of microelectrodes (each array smaller than a pea), surgically implanted into the surface of the brain, to record patterns of neural activity directly. These signals are then transmitted by cable to a computer algorithm that transforms them into actions such as speech or cursor movement. You can read how it works and watch the video here.

Stanford researchers also trained the neural network SleepFM on many hours of polysomnography data to enable disease prediction.

Significant in the field of brain research are the clinical studies of the Department of Psychiatry and Behavioral Sciences at the University of Minnesota (USA). Of particular interest is the unique experience of Professor Ziad Nahas in the field of functional neuroimaging and brain stimulation using various methods — transcranial magnetic stimulation (TMS), vagus nerve stimulation (VNS), epidural prefrontal cortex stimulation (EpCS), deep brain stimulation (DBS), electroconvulsive therapy (ECT), and focal electroconvulsive therapy (FEAST).

The development of mechanisms for transmitting sensations between humans and machines is being carried out by scientists of the Chinese Academy of Sciences working on perception systems that imitate biological nervous systems. The researchers discovered that confining ions within layered graphene oxide membranes can be used to create a memristive device (one that remembers the amount of current that has passed through it) capable of simultaneously performing synaptic functions and chemical sensory perception. By developing a method for slowing ion transport and inducing memristive behavior, the scientists created a nanofluidic device that connects to reservoir computing algorithms (a specially trained neural network) and can classify sweet, salty, bitter, and sour tastes.
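
Reservoir computing itself is easy to sketch: a fixed random recurrent network transforms the input signal, and only a simple readout is trained. The echo-state toy below classifies synthetic four-channel "taste" signals; it illustrates the algorithmic idea, not the nanofluidic device's actual pipeline.

```python
# Minimal echo-state-network sketch of a reservoir-computing classifier:
# a fixed random reservoir plus a trained linear readout. Synthetic data.
import numpy as np
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(3)
n_res = 200
W_in = rng.normal(scale=0.5, size=(n_res, 4))    # 4 sensing channels
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

def reservoir_state(signal):
    # Drive the reservoir with a (T, 4) signal; return its final state.
    x = np.zeros(n_res)
    for u in signal:
        x = np.tanh(W_in @ u + W @ x)
    return x

# Synthetic "taste" signals: each class excites one of the 4 channels.
labels = rng.integers(0, 4, size=120)            # sweet/salty/bitter/sour
signals = rng.normal(size=(120, 50, 4)) * 0.1
signals[np.arange(120), :, labels] += 1.0

states = np.array([reservoir_state(s) for s in signals])
clf = RidgeClassifier().fit(states[:80], labels[:80])
print(clf.score(states[80:], labels[80:]))       # held-out accuracy
```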

A research group from the Chinese Academy of Sciences, in collaboration with Huashan Hospital affiliated with Fudan University and its associated enterprises, successfully completed the second clinical trial of an invasive brain-computer interface. The team used a high-performance wireless invasive brain-computer interface system WRS01, which enabled a paralyzed patient to reliably control an intelligent wheelchair and a robotic dog via electroencephalography (EEG), providing autonomous movement and object search in real-world conditions.

Biological Computers

By studying the human brain, which consumes an average of 17 kilocalories per hour, scientists are not only developing ultra-fast electronic computing systems based on it, but are also attempting to recreate the environment of biological computation itself.

The company CORTICALLABS has developed a technology for connecting biological tissue with silicon. The company’s researchers claim that the use of biological neurons will make it possible to study the criteria of their behavior in detail and will provide unique experience in the treatment of many diseases.

The Biological Black Box system is positioned as a biological network consisting of hundreds of thousands of grown living neurons used to optimize the training of artificial neural networks.

The Swiss project FINALSPARK was created to develop a biological chip. The laboratory is developing a process for creating neurons that grow into clusters called organoids, which in turn can be attached to electrodes — after which they can be used as mini-computers.

The Unconventional AI project is developing a nonlinear silicon structure for computation that imitates the physical principles of the human brain. The strategy is to create a processor capable of performing computations through its own physical dynamics rather than step-by-step digital simulation with clock-based control.

Glaucoma

A comparison of the performance of neural networks in diagnosing glaucoma was announced on June 17, 2025, in the Cochrane Library by a group of researchers led by Kalyan Vemulapalli, a research fellow at Moorfields Eye Hospital in London, who provided a detailed description of the protocol for future studies.

Glaucoma is a pressing issue for humanity. Approximately 10% of the 70 million people with glaucoma are blind in both eyes, making it the leading cause of irreversible blindness in the world. Glaucoma is classified into two main categories: open-angle and angle-closure. Both types can occur without a known underlying cause — referred to as primary glaucoma. Secondary glaucoma can result from trauma, medication use, inflammation, tumors, and other conditions (e.g., pigment dispersion or pseudoexfoliation). All we can do is wish these scientists good luck.

Dr. Shafi Balal, together with a team of scientists from the Moorfields Eye Hospital (London), presented at the 43rd Congress of the European Society of Cataract and Refractive Surgeons (ESCRS) a diagnostic method for detecting keratoconus, which occurs in 1 out of 350 people. The researchers used a neural network to analyze 36,673 OCT images of 6,684 different patients, as well as other patient data. The neural network algorithm was able to accurately predict whether a patient’s condition would worsen or remain stable, using images and data from the first visit only. By using the neural network, the researchers were able to classify two-thirds of patients as low-risk, who did not require treatment, and the remaining third as high-risk, who required urgent cross-linking therapy.

Cataract treatment is performed by the POLARIS system, developed by Horizon. The system combines a neural network and microsurgical robotics specifically designed for the complex structures of the human eye.

Neuralink’s partner company, SCIENCE, has given a second life to the project of the French company Pixium Vision SA, called PRIMA, which had lost its funding. Originally conceived as an alternative to the radio-wave bionic eye Argus II developed by Second Sight — a company that went bankrupt in 2022 and was later acquired by Nano Precision Medical (now VIVANI) — the project had been on the verge of closure, but SCIENCE decided to continue the research. Now, the PRIMA project is working on an ocular implant with a cellular structure that converts infrared light emitted by special glasses into signals that activate retinal neurons. Vision improvement does not occur immediately, and patients have to make an effort to learn how to recognize letters and numbers using such a device. However, studies show a high rate of recovery. Among 32 participants in the experiment, vision improvement was recorded in 80% after 12 months of using the technology.

More discoveries

A Springer Nature journal published the results of a study conducted using a neural network on 336 images containing 206 fractures, identified by the unanimous opinion of two radiologists, in 48 patients (average age 12 years) with osteogenesis imperfecta (a genetic disorder characterized by bone fragility and an increased risk of fractures) at Great Ormond Street Hospital for Children in London. Overall, radiologists diagnosed fractures more accurately than artificial intelligence alone: per-study accuracy was 83.4% [95% CI: 75.2%, 89.8%] versus 74.8% [95% CI: 65.4%, 82.7%], respectively. However, radiologists who additionally used artificial intelligence improved significantly over those working independently: average per-study accuracy increased by 7.3 percentage points to 90.7% [95% CI: 83.5%, 95.4%].

The journal Nature India reports that a machine learning tool can accurately identify enzymes capable of breaking down a specific pollutant after screening millions of bacterial enzymes. The neural network "XenoBug" can identify enzymes potentially capable of degrading pesticides, plastics, petroleum-processing by-products, and pharmaceutical waste. Its developers from the Indian Institute of Science Education and Research in Bhopal claim that it also reveals interactions between microbes and pollutants in different environments and tracks degradation pathways.

A neural network for diagnosing plant diseases that does not require significant computational power was developed by scientists from India, based on hybrid deep learning together with Grad-CAM and Grad-CAM++ (low-resolution class activation maps that highlight the important areas of an image without loss of focus).
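
Grad-CAM, which the paper's explainability layer builds on, is a short computation: pool the gradients of the class score over the last convolutional layer's activations and use them as channel weights. Here is a minimal sketch with a stock torchvision backbone — our choice for illustration, not the authors' network.

```python
# Minimal Grad-CAM sketch: weight the last conv layer's activations by the
# pooled gradients of the target class and sum them into a coarse heatmap.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
acts, grads = {}, {}
layer = model.layer4[-1]
layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

img = torch.randn(1, 3, 224, 224)  # stand-in for a leaf image
score = model(img)[0].max()        # score of the top predicted class
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # global-average-pooled grads
cam = torch.relu((weights * acts["v"]).sum(dim=1))   # class activation map
print(cam.shape)                                     # (1, 7, 7) coarse heatmap
```

Upsampled to the input resolution, the heatmap highlights the regions that drove the prediction — for plant diseases, ideally the lesions on the leaf.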

Australian bioengineers Alexander J. Cole, Christopher E. Denes, and others have developed a method called PROTein Evolution Using Selection (PROTEUS) – a platform that uses chimeric virus-like vesicles to enable long-term directed evolution campaigns in mammals without compromising system integrity. This platform is stable and capable of generating sufficient diversity for directed evolution in mammalian systems. In fact, they successfully tested a bioengineering system capable of accelerating protein evolution directly inside mammalian cells, allowing scientists to create and select new versions of proteins that work better or possess desired properties – all without leaving the living cell. Previously, this was done in test tubes and bacteria, which significantly reduced control over the outcome.

The Seattle-based Institute for Systems Biology (ISB) announced on June 26, 2025 the launch of the generative neural network model TARPON, developed by the institute's director, Dr. James Heath, and researcher Daniel Chen. Trained on more than one million human T-cell receptor (TCR) sequences, TARPON reveals the fundamental "sets of rules" that govern the development and functioning of the immune system, especially at the early stages of fetal development — in the thymus, where immune cells first learn to distinguish "self" from "non-self". The model analyzes the hypervariable region of T-cell receptors — the unique protein sequences that allow T-cells to recognize foreign cells — and maps them into a 42-dimensional space. From this representation, TARPON can generate completely new, realistic T-cell receptors, showing how the immune system may respond to viruses, cancer cells, and even novel pathogens. The name coincides with that of another TARPON, an automated telomere-analysis pipeline (telomeres are the terminal segments of chromosomes) with a graphical interface adapted for nanopore sequencing, created in the laboratory of Dr. Bauman and funded, among others, by the Chan Zuckerberg Initiative and the Sergey Brin Family Foundation.

Health Insurance in the USA.

In the field of health insurance in 2025, the use of neural networks is nothing new—especially in the United States. A survey conducted across 16 U.S. states revealed that 84% of health insurance companies currently use artificial intelligence and machine learning to some extent. These tools are applied across various health insurance product lines, including individual and group health plans, as well as student health insurance programs.

One problem with such insurance in the United States is the requirement that doctors obtain approval for payment from the insurance company before performing a medical procedure covered by the insurance. Ajita Hanel, author of the article "The Pain of Prior Authorizations: Consequences of the De‑Prioritization of Human Life in Favor of Cost Containment," writes that in many cases treatment does not happen or is delayed because of the need to obtain such approval.

The government is trying to regulate this area. For example, in the state of Colorado (USA), a regulation was issued in 2023 aimed at preventing discrimination in insurance companies’ decisions on covering medical procedures. Other states have also adopted a number of resolutions and laws regulating the use of artificial intelligence algorithms.

The federal MEDICARE law currently in effect in the United States and the federal Medicare Advantage program, which provide health insurance for individuals aged 65 and older and for people with disabilities, embody the principle of federal preemption — the supremacy of federal law over state law. This produces an almost identical pattern in how the justice system resolves claims brought by insurance-company clients.

Although insurance companies claim they use neural network algorithms to better predict risks and determine the need for treatment, they do not publish the algorithms themselves. This raises concerns among some journalists who believe that insurance companies use these algorithms as a tool to deny payouts in order to reduce costs. For example, the authors of the article "AI Denies Care: How Medicare Advantage Plans Use Algorithms to Cut Off Help for the Elderly in Need" (2023), Casey Ross and Bob Herman, argue that insurers use unregulated predictive algorithms under the guise of scientific rigor to pinpoint the moment when it becomes justifiable to stop paying for an elderly patient’s treatment.

Law professor and research fellow at the O'Neill Institute for National and Global Health Law, Jennifer D. Oliva, writes in her article on the SSRN (Social Science Research Network) that, unlike clinical algorithms used by medical institutions and healthcare providers for diagnosing and treating patients, insurance coverage algorithms are unregulated and therefore are not subject to safety and efficacy review by the U.S. Food and Drug Administration (FDA) prior to being released to the market.

Remote Medical Care.

Virtual online healthcare services are also rushing to integrate neural networks into their operations.

U.S.-based 24/7 virtual care platforms like CEDARS-SINAI CONNECT and K HEALTH have launched neural network–powered virtual care applications. They claim this has improved the efficiency of delivering online medical care.

Other medical platforms also operate using neural network models, including HACKENSACK MERIDIAN HEALTH (in partnership with K Health), TELADOC HEALTH, EMED HEALTHCAREUK (formerly Babylon Health), CURAI HEALTH, YOUR.MD (HEALTHILY), DOXY.ME with its video call service DOXIMITY DIALER, and others.

Researchers propose the creation of medical neural network agents, functioning like doctors for medical consultations, based on the integration of a dialogue component, a memory component, and a processing component that generates reports from the collected information. To train the agent and evaluate the results, they used a dataset of real medical dialogues and expert physician assessment.
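
The three-component design can be caricatured in a few lines: a dialogue component gathers answers, a memory component stores them, and a processing component compiles the report. The toy class below is purely illustrative — no LLM is attached, and all names are our own.

```python
# Toy sketch of the three-component medical agent described above. Purely
# illustrative; a real system would drive the dialogue with an LLM.
from dataclasses import dataclass, field

@dataclass
class MedicalAgent:
    memory: list[tuple[str, str]] = field(default_factory=list)

    def dialogue(self, question: str, answer: str) -> None:
        # Dialogue component: in a real system, an LLM would also choose
        # the next question based on what is already in memory.
        self.memory.append((question, answer))

    def report(self) -> str:
        # Processing component: compile the collected answers into a report.
        lines = [f"- {q} {a}" for q, a in self.memory]
        return "Consultation summary:\n" + "\n".join(lines)

agent = MedicalAgent()
agent.dialogue("Chief complaint?", "Persistent cough for two weeks.")
agent.dialogue("Any fever?", "Mild, in the evenings only.")
print(agent.report())
```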

As we can see, neural networks in medicine have taken a solid step toward a future with artificial intelligence. We can only wish them fewer obstacles along this long and complex road. And to all of us — health and longevity.

Stay calm, mindful, and take care of yourself.

Take the SAID test to once again make sure that AI is not capable of deceiving us.

said-correspondent🌐

You can create a separate thread on the community forum.
