Today, most tools used in S&R documentation exist in a digital format (video/text).
Digital libraries, e-books, databases, information processing software, science-sharing platforms – all reside on the internet / intranet and are influenced by its evolution.
Until recently, informing oneself and promoting one's work in the digital world mostly raised data-security concerns.
Now, researchers and scientists face many more “unseen problems” generated by AI algorithms (classification, discrimination) or by Internet politics (regionalization and regional influence of information processing).
Classification algorithms refer to the labelling of information on the Internet.
Even if the platforms we use have labeling tools and apply SEO, it is possible that not all information will be seen or correctly processed by AI, due to certain limits:
- AI doesn’t know all the words from all languages, so it won’t be able to make quick and correct correlations.
- AI doesn’t understand the meaning of words identically to humans.
AI is taught the meaning of words from dictionary-like databases, while humans have learned many words contextually and intuitively.
- AI has a much narrower vocabulary and a weaker grasp of the message (concepts, expressions, metaphors, nonverbal language, emotions etc.).
- AI works on non-global platforms, so it does not integrate information.
As a result, all high-performance human-machine communication projects remain isolated, one-off efforts: Siri, Alexa, Cortana – applications that still have no use in science and research.
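The vocabulary limitation described above can be illustrated with a minimal keyword-based classifier – a hypothetical sketch, not any real platform's pipeline: terms absent from the algorithm's vocabulary simply cannot be matched, so content from new or frontier domains goes unlabeled.

```python
# Minimal sketch of a dictionary-based classifier. The labels and
# keyword sets are invented for illustration; real platforms use far
# richer models, but the out-of-vocabulary failure mode is the same.

LABELS = {
    "chemistry": {"molecule", "reaction", "catalyst"},
    "astronomy": {"telescope", "galaxy", "orbit"},
}

def classify(text: str) -> list[str]:
    """Return every label whose keyword set overlaps the text's words."""
    words = set(text.lower().split())
    return sorted(label for label, keys in LABELS.items() if words & keys)

# A document using known vocabulary is labelled correctly...
print(classify("The catalyst accelerated the reaction"))  # ['chemistry']
# ...but frontier-science terms outside the vocabulary match nothing.
print(classify("Astrochemical nucleosynthesis survey"))   # []
```

The second call returns an empty list: to this kind of algorithm, a text built from unknown words is effectively invisible, which is the under-representation problem discussed below.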
The more AI learns from large databases and is trained in predictable contexts (with a clear history of the input-output relationship), the greater its performance in knowledge, correct reactions and nuance-learning will be.
New domains, frontier and integrative sciences, which are under-represented in the historical data on which algorithms are based, will be much more affected by the inability of AI to perform.
This development will also be slowed down, however, by scientific anti-espionage policies, which cause a significant part of data not to be shared; all the more so since intellectual cracking is on the rise, both online and offline.
Since there is no globally standardized Internet, a lot of information is regulated, and part of the unregulated information is not perceived in the same way by everyone.
Local values and customs manifest online too, influencing the understanding, reactions and attitudes that one can have about a subject, concept, idea, technique or method.
Regionalization of the Internet
Since 2018, China and Russia have announced transferring their Internet networks onto their own national servers, where access is granted only on the basis of official local documents.
Even though the two powers have never had a policy of transparency and marketing in their research, the regionalization of the Internet will further isolate the rest of the world's researchers from what is happening in the laboratories of these two research leaders, as the latter will still have access to the “original internet”.
For S&R this could mean new ethical issues and the introduction of new transparency and science-sharing standards.
Will these new standards come into conflict with current research-sharing policies with the community?
NLP-driven applications must not only hear what we say, but also understand it and even reply in more human ways.
Improving performance on the Internet
Tips & tricks for Science & Research:
- use English predominantly
- search for information by frequently used tags
- give your videos names that are as clear as possible, and use many tags
- use a new tag repeatedly, to “reinforce” it in the algorithm
- use words with meanings as close as possible to their dictionary definitions
- use platforms that integrate content
- use a platform for a longer time to understand its algorithm – not all of them work the same
- promote science and research on the internet with tags that are popular within the community, so that algorithms will display your work among their first search results
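The last tip – preferring tags already popular in the community – can be sketched as a simple ranking step. This is a hypothetical illustration: the tag names and popularity counts below are invented, and real platforms do not expose their ranking logic.

```python
# Minimal sketch: order candidate tags by community popularity so the
# most widely used ones lead. POPULAR_TAGS is an invented stand-in for
# usage statistics a platform or community might publish.

POPULAR_TAGS = {
    "machine-learning": 9500,
    "open-science": 4200,
    "spectroscopy": 1800,
    "heliophysics": 40,
}

def rank_tags(candidate_tags: list[str]) -> list[str]:
    """Sort candidate tags by popularity, most popular first.

    Unknown tags get a count of 0 and therefore sink to the end.
    """
    return sorted(candidate_tags,
                  key=lambda t: POPULAR_TAGS.get(t, 0),
                  reverse=True)

print(rank_tags(["heliophysics", "open-science", "machine-learning"]))
# ['machine-learning', 'open-science', 'heliophysics']
```

Leading with well-established tags and appending niche ones keeps a work discoverable through popular searches while the rarer tags are gradually “reinforced”, as suggested above.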