The artificial intelligence of the Yva.ai platform uses NLP (Natural Language Processing) to recognize tasks, positive or negative sentiment, and signs of conflict in communications between employees. This analysis helps identify the causes of employee burnout and helps employees cope with the difficulties that arise.

The Yva.ai platform uses machine learning to process information, including textual information presented in multiple languages. A key tool for analyzing natural language text is a family of deep neural network-based classifiers. These networks perform primary analysis of message flows inside a company using the Yva.ai platform.

The information contained in messages is highly sensitive for our customers' businesses. With this in mind, we deliberately designed Yva.ai so that all text analytics is implemented as classification. The platform's task is to determine whether a text contains a set of high-level features that can yield useful information about the performance of an individual employee, their contribution to a healthy atmosphere in the team, and the employee's actual (not nominal) place in the company's business processes.

Modern e-mail systems use machine learning to detect unwanted messages (spam). The result of this analysis is a mark added to the message header that signals the nature of the content without being specific: the mark does not tell you what product is being advertised, how much it costs, or even whether it is advertising at all.

Yva.ai classifiers work on the same principle. They tag the message with a set of markers that allow only general conclusions about the information contained in the text.

Examples of the Yva.ai platform markers:

  • tone markers: positive, negative, neutral;

  • task presence: task, no task;

  • conflict signs presence: conflict, no conflict;

  • signs of praise/appreciation for a job well done: reward, no_reward, and others.
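The marker set above can be pictured as the only output the platform keeps for a message. A minimal sketch, assuming an illustrative field layout (the names and values below mirror the list above; they are not Yva.ai's actual schema):

```python
# Illustrative sketch of the coarse marker set attached to one message.
# Field names and values are examples, not Yva.ai's actual schema.
markers = {
    "tone": "negative",         # positive / negative / neutral
    "task": "task",             # task / no task
    "conflict": "no_conflict",  # conflict / no conflict
    "reward": "no_reward",      # reward / no_reward
}

# Like a spam flag, the marker set carries none of the message's wording.
assert "text" not in markers
print(sorted(markers))
```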

How does Yva.ai work with natural language?

Yva.ai is designed so that messages are not stored in the platform: they cannot be read or stolen from it, because they exist only in the information systems of the customer's company. It is therefore safe to let Yva.ai read the text.

We have developed and patented a two-step procedure for working with text. Thanks to it, we can use already trained classifiers or train new ones even without the text itself.

The algorithm transforms the text while meeting two important requirements:

  1. On the one hand, the algorithm excludes the possibility of restoring the original text: its content, the specific entities it mentions (names, locations, account numbers, addresses) and other identifying information;

  2. On the other hand, the algorithm preserves the conceptual content of the text in a so-called artificial form, as a multidimensional vector. It is this representation that makes it possible to apply classifiers based on deep neural networks.

Recently, powerful methods for analyzing textual information based on deep neural networks have appeared. Especially interesting are unsupervised methods, which do not require laborious manual text preprocessing. To implement such a process in practice, you first choose some artificial task for the future neural network. By solving this task, the neural network "learns" the features and structure of the data and can then use this knowledge to solve applied tasks.

An example of such an artificial task is information compression. First, one part of the neural network learns to transform the original signal (such as a text document) into a compact form: a vector of a few numbers. Then another part of the neural network learns to reconstruct the original signal from the resulting compressed representation.

A typical neural network architecture for this kind of task is called an Auto Encoder. It consists of two parts: an Encoder and a Decoder.

The Encoder implements the function:

E: X → z,

where X is the original signal and z is its compact vector representation.

The Decoder implements the function:

D: z → X',

where X' is the reconstructed signal.

The neural network is trained to minimize the difference between the original and the reconstructed signal, for example, in the following form:

argmin_{E,D} ||X - X'||
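The Encoder/Decoder scheme and the objective above can be sketched as a tiny linear autoencoder in plain NumPy. The data, sizes and learning rate below are assumptions chosen only for the demo, not anything from the platform:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with hidden low-dimensional structure: 200 "signals" in R^10
# generated from 3 latent factors (sizes chosen only for this demo).
latent = rng.normal(size=(200, 3))
X = latent @ rng.normal(size=(3, 10))

d, k = 10, 3                               # signal and bottleneck sizes
W_e = rng.normal(scale=0.1, size=(d, k))   # Encoder E: X -> z
W_d = rng.normal(scale=0.1, size=(k, d))   # Decoder D: z -> X'

def reconstruction_error(W_e, W_d):
    z = X @ W_e              # compact representation
    return np.mean((X - z @ W_d) ** 2)     # || X - X' ||^2

before = reconstruction_error(W_e, W_d)
lr = 0.01
for _ in range(1000):                      # gradient descent on the objective
    z = X @ W_e
    err = z @ W_d - X
    grad_d = z.T @ err / len(X)
    grad_e = X.T @ (err @ W_d.T) / len(X)
    W_d -= lr * grad_d
    W_e -= lr * grad_e
after = reconstruction_error(W_e, W_d)

print(before, after)   # the error drops as E and D co-adapt
```

Because the toy data are exactly rank 3 and the bottleneck is 3, this linear pair can reconstruct almost perfectly; real text and nonlinear networks cannot, which is the point of the following paragraphs.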

Some may recognize this scheme as one possible implementation of the information bottleneck method from information theory. Its essence is that, given only data samples (text, in our case), we can run an algorithm that yields a trained neural network capable of representing text in an artificial form.

The Encoder of a network trained this way to compress information becomes a specialist in how the data are arranged: what patterns and structure they have. We can say that the Encoder is an archiver that converts text X into a digital archive z, while the Decoder is an unpacker. But unlike ordinary archiving software such as ZIP, where we can get our data back in its original form, with neural networks there is no way to restore the original data. The capacity of the digital representation is not enough to store the full version, and the decoding process is probabilistic rather than exact. Instead of the exact original form, only some generalized features are preserved.

How is it related to the Yva.ai algorithm?

Yva.ai does not store the text itself. It stores the vector of numbers z obtained as a result of digitization by the neural network. Moreover, to rule out any correspondence between a specific text and a specific vector, we store the vector with small-amplitude random noise added to it.
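The noise-adding step can be sketched in a few lines of NumPy. The vector values and the noise scale below are illustrative assumptions, not the platform's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def store_representation(z, noise_scale=0.01):
    """Return the vector that would actually be persisted: the embedding
    plus small-amplitude random noise, so a stored vector no longer maps
    one-to-one to a specific source text.  The noise scale here is an
    illustrative value, not Yva.ai's actual parameter."""
    return z + rng.normal(scale=noise_scale, size=z.shape)

z = np.array([0.12, -0.87, 0.44])   # embedding produced by the Encoder
stored = store_representation(z)

print(np.max(np.abs(stored - z)))   # close to z, but never identical
```

The stored vector stays close enough to z for classifiers to work, yet an exact-match lookup against a known text's embedding fails.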

Thus, instead of the text itself, we store high-level features extracted from it, in a form that cannot be decoded explicitly. Yet the information contained in such a representation is sufficient to solve applied tasks that were not even planned when the encoder was trained. For example, given the encoder and a model collection of manually labeled texts (with labels such as "Sport", "Weather", "Scientific article", "Conflict"), we can at any time train another model that takes the representation vector as input and has, as its target task, the correct determination of the category, i.e. the classification.

If, while training the Auto-Encoder model, we slightly modify the task and teach it to unpack z not into the source text (say, in Russian) but into the same text translated into many other languages, we get a model that maps text into a form that does not depend on the language in which the text was written. A classifier trained on such a language-invariant representation turns out to be invariant both to the language of the training corpus and to the language of the text being classified.

Using the capabilities of deep neural networks, Yva.ai performs primary analysis of message flows inside the company. At the same time, it does not store the original text: it examines the text for markers and maps the markers to one of its available classifications.

What classifications does the Yva.ai platform have?

Positive and negative sentiment

In the context of a company's business processes, the "message sentiment" assessment helps gauge the health of the entire organization. Only the informative part of a message is analyzed, which means that greetings and courtesies are not considered factors affecting the positive or negative sentiment of the message as a whole.

If a single sentence contains both negative and positive, Yva.ai considers the entire sentence to be negative.

Examples of negative:

  • The deadline is approaching, but nothing is ready.

  • You didn't answer the request.

  • You have missed the contract payment deadline.

Examples of positive:

  • I am up for communication, for me it is always interesting to talk to colleagues.

  • In general, the impressions are good, we want to cooperate.

  • I hope we will go on at this pace and finish everything timely and on a high level.
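The precedence rule stated above (negative wins over positive within one sentence) can be sketched as a tiny aggregation function. The marker names are the ones listed earlier; the function itself is an illustration, not the platform's code:

```python
def sentence_sentiment(clause_markers):
    """Combine per-clause tone markers into one sentence-level label,
    following the rule described above: any negative clause makes the
    whole sentence negative."""
    if "negative" in clause_markers:
        return "negative"
    if "positive" in clause_markers:
        return "positive"
    return "neutral"

# A sentence with both a complaint and a compliment is labeled negative.
print(sentence_sentiment(["positive", "negative"]))   # -> negative
```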

Praise

Praising an employee in a message's informative part is most often aimed at expressing, explicitly or implicitly, gratitude for a job well done.

Praise is a complex action. It is part of the positive subset and serves as an assessment of achievements, a powerful motivational driver and a feedback form at the same time.

Praise is a marker that reflects respect for the employee's personality and work, impartiality, and a lack of arrogance on management's part. On the subordinates' part, it is an indicator of work comfort and satisfaction with the company's top management. Praise can also be addressed to a group of employees or to the whole company, in which case it is a significant marker of a healthy team environment.

Examples of praise:

  • We are the best team and our sepulcas will conquer the market!

  • Jason, you're a great sysop, and a great devops as well.

  • Thank you for the presentation! Everyone listened with rapt attention.

Conflicts. Doesn't Yva.ai confuse conflict with sarcasm?

The negative subset reflects the complex essence of processes that are harmful from the business point of view. They are of a very diverse nature.

A conflict can be interpersonal, when two people are in a state of personal antipathy, or group-level, when, for example, different company departments cannot align their work.

In text, a conflict can be expressed in non-trivial ways, and it is important to separate it from sarcasm, jokes, and so on.

How does Yva.ai deal with the complex task of determining whether a text contains conflict?

The answer is simple. Neural networks and their capabilities are a reflection of the data they were trained on. The neural networks of Yva.ai classifiers are trained on large volumes of texts manually labeled by specialists. We made sure that the content of these texts reflects all aspects of human communication through text messaging as fully as possible.

An example of a letter in which conflict is detected:

"John, Jason doesn't hear you. His actions are aimed at FIRM elimination. I've talked with Kate several times this week. The situation is serious.

Having full access to all the Company's data, Jason turned out to be unable to read, analyze and correctly present the financial information to the UPGRADE shareholders. I was not actually invited to a key meeting on this issue. I was ready to attend; all that was needed was to change the time by one hour.

All that resulted in a complete fiasco in the negotiations and emotional attempts to place the blame on the management.

I hope that now the reasons why I request the initial information received by Jason in April are clearer. I want to examine the original data from the UPGRADE owners, not the interpretation of Jason and his colleagues.

The number of mistakes exceeds reasonable limits. Unnecessary emotionality is reinforced by impudent obstinacy. All this leads to a dead end. I propose withdrawing Jason's mandate to negotiate the deal with UPGRADE.

I am ready to start tomorrow and spend 50-70% of my time on negotiations with UPGRADE".

Tasks. How does Yva.ai recognize them in text?

The Yva.ai platform understands a task as the presence of instructions for a specific person or group of employees in the message's informative part.

Examples of tasks:

  • Print out, sign, and send the documents by courier to the address below.

  • Schedule a meeting, discuss my proposal and inform me about the result.

  • John, choose an acceptable option and give appropriate instructions.

Prepared by: Vyacheslav Seledkin