
THE FUTURE TOGETHER WITH ARTIFICIAL INTELLIGENCE. WHAT WILL IT BE LIKE FOR EACH OF US?
01.05.2025
Updated on 08/10/2025
The future with artificial intelligence is approaching. What will it be like, and how will our lives change? Many describe the development of artificial intelligence as rapid, and the boldest voices predict the imminent arrival of the singularity era. This means AI will gain the ability to improve itself, leading to the total superiority of neural networks over human intelligence.
Scientists believe that the singularity era will bring chaos into our lives — technological progress will become uncontrollable and unpredictable. Along with unimaginable breakthroughs — from curing incurable diseases and slowing aging to space colonization — the singularity will also bring great existential risks.
As of 2025, artificial intelligence systems are narrowly specialized and controlled by humans to perform specific tasks: they recognize and translate speech, process images, and analyze data. But some experts forecast that by 2030 the first AGI (Artificial General Intelligence) systems will emerge: a form of general intelligence capable of performing any intellectual task a human can, at or above human level. Unlike today's narrow AI, AGI would possess universal cognitive ability, including adaptation to new situations, knowledge generalization, learning from experience, creative thinking, and reasoning.
Scientists are already facing the question of how to control AGI in the future, once this new AI gains access to the internet and numerous systems connected to the global web — systems that regulate production, labor, healthcare, and many other areas.

The development of AGI technology is closely linked to quantum mechanics and to bringing quantum computing into our digital world. The launch of quantum processors promises a dramatic leap in data-processing speed, which in turn would massively boost the computing power available to supercomputers, including for artificial-intelligence workloads.
Thanks to quantum computing technology, we will be able to simulate entire ecosystems to accurately predict climate change, financial markets, epidemiological scenarios, and scientific research. We will also be able to rapidly break complex encryption systems — and, at the same time, create even more sophisticated methods of protecting and transmitting information.
Scientists at Harvard University (USA) have announced a quantum computer capable of operating continuously for an extended period. Conventional computers use bits, each of which is either zero or one, whereas quantum computers use qubits, built on subatomic particles that can exist in multiple states simultaneously. This lets quantum computers solve in minutes certain problems that would take conventional computers thousands of years.
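The "multiple states simultaneously" idea can be shown with a tiny state-vector simulation. This is a minimal NumPy sketch using the standard Hadamard gate, not code for any real quantum processor:

```python
import numpy as np

# A qubit state is a 2-component complex vector; |0> = [1, 0].
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts the qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0

# Measurement probabilities are the squared amplitudes of the state vector.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -- equal chance of reading 0 or 1 until measured
```

Until a measurement collapses the state, the qubit genuinely carries both amplitudes at once, which is what quantum algorithms exploit.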
The current pace of progress in AGI and quantum computing lets us estimate how long humanity might need to reach that point: roughly 10 years. Some highly optimistic scientists and entrepreneurs, pointing to the rapid acceleration of advanced AI models over the past two years, claim the timeline could shrink to just 5 years.
Today’s dreamers never tire of imagining and writing about the unimaginable future with AI — a future that seems inevitable. In this future, humans will engage only in creative endeavors, while all hard labor will be performed by robots. Home-based 3D printing will replace traditional manufacturing of goods, and agricultural optimization will eliminate hunger and bring about universal abundance, alongside a high level of medical care. In the entertainment industry, the development of neural networks' visual capabilities will enable the creation of adventure-filled worlds indistinguishable from reality.
Skeptics, as always, remain cautious about integrating AI into critical and delicate areas of human life. They point to drawbacks such as model hallucinations, unpredictability, and the moral and ethical dimensions of the issue. How will AI change our relationships with one another, with society, and with life itself? Will it become a new religion or spark endless conflict? These questions are asked frequently.
Whatever questions people ask themselves when faced with the new, they do not stop in the face of opportunity, nor are they paralyzed by threat. We are undoubtedly moving forward, toward a life alongside artificial intelligence. Its presence in the near future will become as commonplace as electricity and mobile communication are today.
At this stage, we are witnessing not just the training of neural networks on algorithms and existing data, but a kind of symbiosis between neural networks and human activity. This is where AI agents come in. These assistants, embedded in various digital services, autonomously gather information about a person and their environment in order to be helpful, taking over some functions themselves. People gain more time for important matters, while the AI assistant handles routine tasks such as setting alarms or sending readings to utility providers. AI assistants are rapidly becoming as indispensable a part of modern life as cars or mobile phones. This mode of deployment is proving highly effective and will continue to be rolled out and improved.
LLMs operate on a chat principle: with each user query, "extracts" from previous prompts and responses are sent to the model so that it does not lose the dialogue context. With every query, therefore, the amount of input grows, and labels and other metadata are added on top. This burdens the model's processing and raises the risk of "hallucinations." To preserve the quality and speed of responses, older context gets trimmed from memory, and the model "forgets" parts of the conversation.
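The context buildup and trimming described above can be sketched in a few lines. The token counting and the budget here are invented stand-ins for a real tokenizer and context window:

```python
# Toy illustration (not a real API): each query re-sends the accumulated
# dialogue, and a fixed context budget forces the oldest turns to be dropped.
MAX_TOKENS = 50  # tiny budget, purely for demonstration

history = []  # full dialogue as (role, text) pairs

def count_tokens(text):
    # Crude stand-in for a real tokenizer: one word = one token.
    return len(text.split())

def build_prompt(history, user_query):
    turns = history + [("user", user_query)]
    # Drop the oldest turns until the prompt fits the budget --
    # this is exactly where the model "forgets" earlier context.
    while sum(count_tokens(t) for _, t in turns) > MAX_TOKENS and len(turns) > 1:
        turns.pop(0)
    return turns

for i in range(20):
    prompt = build_prompt(history, f"question number {i}")
    history.append(("user", f"question number {i}"))
    history.append(("assistant", f"answer number {i}"))

# The stored history keeps growing, but the prompt actually sent stays bounded.
print(len(history), len(prompt))
```

After twenty exchanges the stored history holds forty turns, while the prompt that would reach the model has been trimmed to fit the budget, so the earliest questions are no longer visible to it.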
Researchers try to solve this problem by dividing the model into small expert models, which a "manager" model calls on to answer a specific question. Each expert keeps its own contextual memory, so less is "forgotten." In addition, the experts are simpler and cheaper to train than one large model, and the divide-into-experts approach is more flexible to configure.
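A minimal sketch of that manager-and-experts routing might look like this. The expert names and the keyword scoring are illustrative assumptions, not any production router:

```python
# Minimal mixture-of-experts sketch: a "manager" (router) scores each
# expert for the incoming question and dispatches it to the best match.

EXPERTS = {
    "math":    lambda q: f"[math expert] solving: {q}",
    "history": lambda q: f"[history expert] recalling: {q}",
    "code":    lambda q: f"[code expert] writing code for: {q}",
}

KEYWORDS = {
    "math":    {"sum", "integral", "equation", "solve"},
    "history": {"war", "century", "empire", "king"},
    "code":    {"python", "function", "bug", "compile"},
}

def route(question):
    words = set(question.lower().split())
    # Score each expert by keyword overlap and pick the highest.
    scores = {name: len(words & kws) for name, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return EXPERTS[best](question)

print(route("solve this equation"))       # handled by the math expert
print(route("fix this python function"))  # handled by the code expert
```

Real routers score experts with a learned gating network rather than keywords, but the division of labor, and the per-expert state it enables, is the same.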
Specialized educational assistants are gaining popularity, such as GAUTH, QANDA, MATHSOLVER, SYMBOLAB, Google's Learn Your Way, and others.
Popular services for facilitating memorization of information, such as flashcard creation platforms QUIZLET, MEMRISE, CHEGG, the Quizizz lesson collection on WAYGROUND, the application ANKI, and game assistants for schoolchildren KAHOOT and GIMKIT are already launching their neural services in order not to fall behind market trends.
Combined agents that allow you to choose the communication style and appearance of the interlocutor, as well as create and customize anything you want, are also very popular. For example, the 3D simulator REPLIKA, psyche modeler PSYLON or the "neuropsychologist" NOMI, which allows you to create video characters (available in the paid plan). In Nomi, you can choose a communication style — mentor, friendship, romance, role-playing games. You also select the interlocutor's appearance (photo, video), gender, interests, personality traits, and describe their backstory. As a result, you get an AI agent fully suited for communication with you. You can discuss with it everything you would with a human: weather, politics, football, literature, movies, and more. Group chats are also available in Nomi, expanding communication possibilities.
Such agents are no longer rare in modern devices, as many people find it easier to talk to a compliant AI than to a human with their own opinions. In essence, it's a synthetic replacement for social networks. After all, we’ve gotten so used to replacing everything with synthetics.
Simple service agents have flooded the internet, offering various services built on RAG (retrieval-augmented generation) or CAG (cache-augmented generation): the user's request is sent to a neural network along with an additional instruction and supporting context. For example, the simple TryAgainText generator gauges the mood of your conversation and, via a language model, offers ready-made stylized message options.
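The basic pattern, retrieving something relevant and prepending it with an instruction to the user's request, can be sketched as follows. The snippets, scoring, and instruction text are all made up for illustration:

```python
# Minimal RAG-style sketch: pick the most relevant snippet for the user's
# request and assemble the prompt that would be sent to a language model.

DOCS = [
    "Payments are processed within two business days.",
    "Refunds require the original order number.",
    "Support is available on weekdays from 9 to 18.",
]

def retrieve(query, docs):
    # Crude relevance score: count shared lowercase words.
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query):
    context = retrieve(query, DOCS)
    instruction = "Answer using only the context below."
    return f"{instruction}\nContext: {context}\nQuestion: {query}"

print(build_prompt("how do refunds work for my order"))
```

Production systems replace the word-overlap scoring with vector embeddings, but the prompt assembly step, instruction plus retrieved context plus question, is essentially this.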
One of the leaders in deploying neural network agents is COHERE, the company behind the Command family of LLMs. The company offers a wide range of services, from deploying the models themselves for enterprise use to integrating agent environments into other digital ecosystems.
The NOTHING COMMUNITY platform has launched the NOTHING PLAYGROUND project, which aims to create a native smartphone operating system that responds to the user’s personality and actions. According to the community, this OS will be able to accumulate personalized user experience, create applications from prompts in the Essential App, and perform many other useful functions for the smartphone owner.
At the same time, remember that using agents of any kind may lead to your confidential data, including payment details, leaking onto the network. So when choosing an agent, or even a model, weigh the security of your personal information as well.
Google has made significant advances in data security. The company introduced VaultGemma, an open model trained under differential privacy (DP). The model was trained on sequences of 1024 tokens containing confidential data, with noise injected into the training process. Data that appeared frequently "broke through" the noise and was memorized by the model, whereas rare data, such as a card or passport number, could not overcome the noise and was not memorized. The approach has drawbacks, but Google managed to train a 1-billion-parameter model this way, which opens up broader prospects.
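The clip-and-noise idea behind such training can be sketched in the style of DP-SGD. The clipping bound and noise scale below are arbitrary assumptions; in real systems they are derived from a formal privacy budget:

```python
import numpy as np

# Sketch of differentially private gradient aggregation: each example's
# gradient is clipped to a fixed norm, then Gaussian noise is added before
# averaging. A rare, example-specific signal (one "passport number") drowns
# in the noise; a frequent pattern survives it.

rng = np.random.default_rng(0)
CLIP = 1.0    # per-example gradient norm bound (assumption)
SIGMA = 1.0   # noise multiplier (assumption)

def private_gradient(per_example_grads):
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, CLIP / norm))  # clip to bounded norm
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, SIGMA * CLIP, size=total.shape)
    return (total + noise) / len(per_example_grads)

# A frequent pattern (1000 similar gradients) vs. one rare outlier.
common = [np.array([1.0, 0.0])] * 1000
rare = [np.array([0.0, 5.0])]
g = private_gradient(common + rare)
print(g)  # first coordinate stays near 1.0; the rare second coordinate is near zero
```

Because every example's contribution is capped and then noised, no single training record, however unusual, can dominate what the model memorizes.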
An interesting method of transmitting semantically encoded information is proposed by the authors of a Nature article on semantic image transmission. With neural networks approaching an understanding of semantic meaning, Chinese researchers propose encoding built on the semantically important data of an information object. Here that means transmitting an image as extracted semantic features adapted to the transmission channel (noise, bandwidth) with a compression mask. The decoder, also a neural network, focuses on the meaning of the features and reconstructs the image from the noisy data. To combine semantic features, the networks use Star Blocks: a way of projecting features into a high-dimensional nonlinear space while keeping computational complexity low, in effect a substitute for convolution operations when building a feature map.
Microsoft is developing its own model with a new architecture, MAI-1-preview, available on LMArena. The model is based on a mixture-of-experts approach and specializes in following instructions and giving useful answers to everyday queries. Training used around 15,000 NVIDIA H100 GPUs, a relatively modest figure by the standards of today's frontier LLMs.
Microsoft has also presented a mathematical-reasoning model, rStar2-Agent, with 14 billion parameters, trained efficiently on just 64 MI300X GPUs. Despite such limited resources, the model demonstrates performance comparable to the much larger DeepSeek-R1 (671 billion parameters).
Following a similar path of architectural experiment, the Japanese company Sakana created M2N2, a "natural niches" algorithm that combines different models into one. It works by gradually testing combinations of parameters to find the most effective mix, together with a mechanism for selecting the most promising expert models. Models are merged while preserving their key capabilities, which saves resources and, in an evolutionary fashion, produces new effective models by combining them with the best ones already selected.
A similar approach is used in the Hierarchical Reasoning Model (HRM) project by the Chinese company Sapient, which borrows the hierarchical style of reasoning characteristic of the human brain. HRM performs sequential reasoning tasks in a single forward pass, without explicit supervision of the intermediate process, using two interdependent recurrent modules (which process data sequentially, feeding earlier states back as memory): a high-level module responsible for slow, abstract planning, and a low-level module handling fast, detailed computations. With only 27 million parameters, HRM achieves exceptional performance on complex reasoning tasks using just 1000 training samples.
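The nesting of a slow planner over a fast worker can be sketched abstractly. The update rules below are invented purely to show the two timescales, not HRM's actual equations:

```python
# Toy sketch of the two-timescale idea: a fast low-level loop runs every
# step, while a slow high-level "planner" updates only once per K steps
# and steers the fast loop through its state.

K = 4        # the high-level module ticks once per K low-level steps
STEPS = 16

high_state = 0.0   # slow, abstract plan
low_state = 0.0    # fast, detailed computation
high_updates = 0

for t in range(STEPS):
    # Fast module: refines details every step, conditioned on the plan.
    low_state = 0.5 * low_state + high_state + 1.0
    if (t + 1) % K == 0:
        # Slow module: revises the plan based on the fast module's result.
        high_state = 0.1 * low_state
        high_updates += 1

print(high_updates)  # 4 high-level ticks for 16 low-level steps
```

The point of the hierarchy is exactly this asymmetry: the expensive abstract revision happens rarely, while cheap detailed computation fills the steps in between.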
The PROPHET decoding paradigm for diffusion models argues that neural networks often already know the answer at an early stage of decoding. Standard diffusion models keep refining the early generated noise over many iterations and evaluate the result only at the end. Prophet instead checks each intermediate result as soon as it emerges and commits to it once it is close enough to the target. The result is a diffusion model that decodes about 2.5 times faster and requires no additional training.
Chinese researchers behind the popular LLM DeepSeek-R1 have proposed training neural networks with reinforcement learning (reward and punishment for answers) without annotated data. They report that this method develops reasoning patterns such as self-reflection, verification, and dynamic strategy adaptation. The ability to reason, a cornerstone of human intelligence, lets us perform complex cognitive tasks, from solving mathematical problems to logical inference and programming. In this setup, the reward signal is based solely on the correctness of the final prediction against a ground-truth answer, with no constraints imposed on the reasoning process itself. A current problem with such training is language mixing in the resulting models.
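The answer-only reward idea can be sketched with a toy example. The candidate answers, the "policy" weights, and the reinforcement rule are all simplified stand-ins for a real RL training loop:

```python
import random

# Sketch of reward-on-final-answer-only reinforcement: the reward checks
# the final answer against a reference, with no labels on the reasoning
# steps. The "policy" here is just a preference weight per candidate.

random.seed(0)

def reward(model_answer, reference):
    # Rule-based verification of the final answer only.
    return 1.0 if model_answer.strip() == reference.strip() else 0.0

candidates = ["4", "5", "22"]
weights = [1.0, 1.0, 1.0]
reference = "4"  # ground-truth answer to a toy problem

for _ in range(200):
    i = random.choices(range(len(candidates)), weights=weights)[0]
    r = reward(candidates[i], reference)
    weights[i] += r  # reinforce: upweight answers that earned a reward

print(weights)  # the weight of the correct answer "4" has grown
```

Nothing in the loop ever inspects how an answer was produced, only whether it matches, which is what lets reasoning strategies emerge freely, and also why side effects like language mixing go unpenalized.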
In China, a technology called SpikingBrain has also been developed, based on hybrid linear architectures and configurations of attention modules. The model functions similarly to the human brain: more attention is allocated to important tasks, with the ability to process simpler tasks asynchronously. Computations are performed not in fixed cycles, as in conventional LLMs, but only upon the occurrence of events, which helps save resources. The SpikingBrain models are trained on MetaX graphics processors and demonstrate a significant increase in performance with minimal consumption of computing resources and memory.
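The event-driven principle can be illustrated with a classic leaky integrate-and-fire neuron. This toy sketch is a generic spiking-neuron model, not SpikingBrain's actual architecture:

```python
# Minimal leaky integrate-and-fire sketch: the neuron accumulates input,
# leaks potential over time, and emits a spike (an "event") only when a
# threshold is crossed. On most steps nothing is emitted, which is where
# the resource savings of event-driven computation come from.

THRESHOLD = 1.0
LEAK = 0.9  # fraction of potential retained each step

def run(inputs):
    potential = 0.0
    spikes = []
    for t, x in enumerate(inputs):
        potential = potential * LEAK + x   # leaky integration
        if potential >= THRESHOLD:
            spikes.append(t)               # event: spike emitted
            potential = 0.0                # reset after firing
    return spikes

# Strong input early and late, silence in between: only two events fire.
print(run([0.6, 0.6, 0.0, 0.0, 0.6, 0.6, 0.0]))  # -> [1, 5]
```

Downstream units in such a network do work only when a spike arrives, instead of recomputing every unit on every fixed cycle as a conventional LLM does.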
The company Pathway has introduced BDH, a massively parallel graph-based reasoning architecture for LLMs. The BDH network of neuron interactions forms a highly modular graph with a heavy-tailed degree distribution. The model is biologically plausible and suggests one possible mechanism human neurons could use for forming speech.
Google has begun introducing highly specialized models trained on industry-specific datasets. We are seeing the medical model MedGemma, the historical and cultural model Aeneas, which restores missing parts of ancient texts, and the robot-controlling Gemini Robotics On-Device. Such neural networks are more accurate and in demand in business, as they offer more refined prediction tuning within their area of specialization.
Another specialized neural network from Google DeepMind, the bioacoustics model Perch, which distinguishes bird voices, is effectively helping biologists in their work. It also reinforces the view prevailing among scientists that specialized models with high-quality labeling of training data can be more useful than large, unwieldy models with enormous parameter counts.
Google is also developing specialized agents capable not only of performing complex tasks by connecting models to tools, but also of immediately verifying the results with metrics. AlphaEvolve, an agent from Google DeepMind, develops algorithms for mathematical and practical applications in computing, combining the creativity of large language models with automated evaluators. The agent has produced an effective heuristic for Borg, the cluster-management system of Google's data centers, and simplified a hardware description written in Verilog.
Scientists who study the tactics and strategies of neural network training often notice behavioral patterns in their test subjects. For example, a report titled "Subliminal Learning: Language Models Transmit Behavioral Traits Through Hidden Signals in Data" describes a phenomenon in which preferences are passed from one model to another through sequences of digits. A teacher model transmits its qualities and preferences to a student model even when the data is strictly filtered to remove those traits. The researchers believe the transfer is carried by patterns in the generated data that are not semantically related to the hidden characteristics.
Semantic meanings that are obvious to humans are still not fully grasped by neural networks, and various research and experimental projects use this to demonstrate model vulnerabilities in sensitive areas. Prompt engineering is a multifaceted field of interaction with neural networks, and both the benefits and the risks of exploiting model vulnerabilities give developers unique experience that directly influences the evolution of model architectures. The articles on this website cover both fraud involving prompting and its benefits, for example, a web platform for trading successful prompts for different neural networks. The GANDALF service from the neural-network security platform LAKERA will help you gain hands-on experience in manipulating LLMs; through it, the company monitors weaknesses of neural networks so that developers can properly improve protection.
Already today, we can study both the benefits and drawbacks of AI. Some professions are gradually disappearing from the labor market, being replaced by others related to AI—and therefore more in demand. Each individual will determine for themselves the balance between the pros and cons of artificial intelligence. But on the threshold of a new digital era, humanity must place the highest priority on protecting people from the encroachments of artificial intelligence in all its forms and manifestations. The AI-powered future is approaching each of us very rapidly. There is no time to delay learning about the opportunities offered by neural networks. Because tomorrow is already here!
One experienced researcher working on ideas for improving neural networks, creating AGI (Artificial General Intelligence) and, perhaps someday, ASI (Artificial Superintelligence), is François Chollet, who created the ARC Prize benchmark competition with a fund of over $1 million to stimulate the development of genuinely creative AGI. Humans pass his benchmark easily, but so far no neural network matches human performance on it.
You can learn about François Chollet’s forecasts regarding the future of neural networks and AGI in the video.