Amazon Transcribe Medical

Digitization of the healthcare sector

In recent years, the healthcare sector has begun to actively embrace modern digital solutions, from telemedicine applications that connect residents of the most remote, hard-to-reach regions to world-class medical services, to sensors and devices that remotely monitor and record patient data such as heart rate, blood pressure, movement, and even behavioral patterns. The unique challenges of COVID-19 played a decisive role in accelerating the digitization of healthcare, as it became clear that many processes in the sector required fundamental transformation.

Currently, medicine has a variety of digital tools to improve communication, administrative and operational processes, and data storage and transfer.

One such tool that facilitates the work of medical professionals is the Amazon Transcribe Medical service.

 

What is Amazon Transcribe Medical?

Writing paper reports used to take doctors a lot of time, and since the digital transition began, healthcare providers have been required to enter medical records into Electronic Health Record (EHR) systems on a daily basis. According to a 2017 study by the University of Wisconsin and the American Medical Association, primary care physicians in the United States spent up to six hours a day entering this data.

In 2019, Amazon launched a service built on top of Amazon Transcribe. It was designed specifically for healthcare professionals to transcribe medical-related speech, such as physician-dictated notes, drug safety monitoring, telemedicine appointments and consultations, or doctor-patient conversations.

The Amazon Transcribe Medical service uses machine learning and natural language processing (NLP) to accurately convert speech or conversation to text. It is trained to understand complex medical language and the specialized terms and measurements used by doctors. Developers can use Amazon Transcribe Medical for medical voice applications by integrating with the service's easy-to-use APIs. Pharmaceutical companies and healthcare providers can use Amazon Transcribe Medical to create services that enable fast and accurate documentation.

The service can transcribe speech from either an audio file or a real-time stream; the input audio can be in FLAC, MP3, MP4, Ogg, WebM, AMR, or WAV format. Streaming transcription is available in US English and can transcribe accented speech spoken by non-native speakers.
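For developers, a batch job against a file already uploaded to S3 takes only a few lines with the AWS SDK. Below is a minimal sketch using boto3; the job, bucket, and file names are illustrative placeholders, not values from this article.

```python
# Minimal sketch: a batch medical transcription job with boto3.
# Job, bucket, and file names are hypothetical placeholders.
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_medical_transcription_job(
    MedicalTranscriptionJobName="dictation-demo",
    LanguageCode="en-US",                                    # US English only
    MediaFormat="mp3",                                       # FLAC, MP3, MP4, Ogg, WebM, AMR, or WAV
    Media={"MediaFileUri": "s3://my-bucket/dictation.mp3"},
    OutputBucketName="my-transcript-bucket",
    Specialty="PRIMARYCARE",
    Type="DICTATION",                                        # or "CONVERSATION"
)

# Poll the job status; the finished transcript lands in the output bucket.
job = transcribe.get_medical_transcription_job(
    MedicalTranscriptionJobName="dictation-demo"
)
print(job["MedicalTranscriptionJob"]["TranscriptionJobStatus"])
```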

This service provides transcription expertise for primary care and specialty areas such as cardiology, neurology, obstetrics-gynecology, pediatrics, oncology, radiology, and urology. Transcription accuracy can be improved further by using custom medical vocabularies.

 

Transcribe Medical use cases

Medical dictation: medical specialists can record their notes by speaking into the microphone of a mobile device during or after interacting with a patient, reducing the administrative workload and letting them focus on providing quality patient care.

Drug safety monitoring: transcribing phone calls about drug prescriptions and side effects enables safer service provisioning by pharmaceutical companies and clinics.

Transcribing conversations: recording conversations between a doctor and a patient in real time without disrupting the interaction allows healthcare providers to accurately capture details such as reported symptoms, medicine dosage and frequency, and side effects. This information can be processed through subsequent text analytics and then entered into Electronic Health Record (EHR) systems.

For online video or phone consultations, the Channel Identification feature can be used. This powerful tool transcribes the patient and clinician audio channels independently and can provide real-time conversational subtitles.
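For a recorded two-channel call, channel identification is switched on in the job settings; a hedged fragment continuing the earlier sketch (the streaming API works similarly, but over a live connection):

```python
# Sketch: transcribing patient and clinician channels independently.
# Reuses the boto3 "transcribe" client from the earlier example.
transcribe.start_medical_transcription_job(
    MedicalTranscriptionJobName="telehealth-call-demo",   # hypothetical
    LanguageCode="en-US",
    MediaFormat="wav",
    Media={"MediaFileUri": "s3://my-bucket/call.wav"},    # hypothetical recording
    OutputBucketName="my-transcript-bucket",
    Specialty="PRIMARYCARE",
    Type="CONVERSATION",
    Settings={"ChannelIdentification": True},             # one transcript per channel
)
```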

 

Benefits of Amazon Transcribe Medical

Amazon Transcribe Medical benefits a wide range of healthcare specialists: nurses, physicians, researchers, insurers, and pharmaceutical companies, as well as their patients. The following features make it highly attractive to clinicians and healthcare professionals:

HIPAA eligible - by supporting automatic identification of protected health information (PHI) in medical transcriptions, Amazon Transcribe Medical reduces the cost, time, and effort spent identifying PHI content through manual processes. PHI entities are clearly labeled in each output transcript, making it convenient to build additional downstream processing for a variety of purposes, such as redaction prior to text analytics.
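In the API, PHI identification is requested with a single parameter; a hedged fragment continuing the earlier sketches (names are again placeholders):

```python
# Sketch: asking the service to label protected health information (PHI).
transcribe.start_medical_transcription_job(
    MedicalTranscriptionJobName="phi-demo",               # hypothetical
    LanguageCode="en-US",
    MediaFormat="mp3",
    Media={"MediaFileUri": "s3://my-bucket/notes.mp3"},   # hypothetical
    OutputBucketName="my-transcript-bucket",
    Specialty="PRIMARYCARE",
    Type="DICTATION",
    ContentIdentificationType="PHI",  # PHI entities are labeled in the transcript
)
```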

Highly accurate transcription - the narrow specialization of the service, aimed exclusively at the needs of the healthcare sector, ensures that even the most complex medical terms, such as the technical names of diseases and medicines, are transcribed correctly.

Improved patient and practitioner experience - the service accurately transcribes all the details of a consultation or conversation without disrupting the interaction, so the doctor does not have to spend time taking notes and writing reports and can focus on the patient.

Easy to use - no prior knowledge of or experience with machine learning is required. Developers can focus on building their medical speech applications by simply integrating with the service's APIs; Transcribe Medical handles the underlying state-of-the-art speech recognition models.

Thus, Amazon keeps investing in the medical sector, empowering healthcare and life sciences and expanding its range of digital services to deliver patient-centered care, accelerate the pace of innovation, and unlock the potential of data, while maintaining the security and privacy of health information.

 

Our experience

With extensive experience in building healthcare applications based on Amazon services and long-term partnerships with global leaders in telemedicine technologies and services, we at Inmost took the opportunity to ease the burden of reporting and documentation for our clients by integrating Transcribe Medical into an application for remote medical consultations. This significantly optimized the medical staff workload, streamlined processes, and increased positive feedback from patients.

Based on this experience, we consider Amazon Transcribe Medical to be a truly important and useful tool for transforming medical services.

And, of course, we are ready to support healthcare organizations on their digital transformation path by providing consulting services, renovating and improving existing platforms or developing efficient and reliable solutions from scratch.

 

Zigbee

According to Statista, there are currently about 43 billion IoT devices.

By 2025, 75 billion IoT devices are predicted to be online, and Statista expects that quite a lot of those devices will be in areas that lack a standard connection.

The future of IoT will be built on open networks and collaboration. Until that future arrives, let's discuss today's connectivity options.

There is no need to dwell on BLE, Wi-Fi, or 5G. There is no competition between these networks; rather, they are complementary.

Let's speak about Zigbee. How does this technology differ from those mentioned above?

Zigbee: what it is and what it's good for

Zigbee is a standards-based wireless technology developed as an open, global connectivity standard to address the unique needs of low-cost, low-power wireless IoT data networks. The Zigbee standard operates on the IEEE 802.15.4 physical radio specification, in unlicensed bands including 2.4 GHz, 900 MHz, and 868 MHz.

Specifications of Zigbee

The Zigbee specifications, which are maintained and updated by the Zigbee Alliance, build on the IEEE 802.15.4 standard by adding network and security layers as well as an application framework.
In theory, this enables mixing implementations from different manufacturers, but in practice Zigbee products have been extended and customized by vendors and are thus plagued by interoperability issues. In contrast to Wi-Fi networks, which connect endpoints to high-speed networks, Zigbee supports much lower data rates and uses a mesh networking protocol to avoid hub devices and create a self-healing architecture.

There are three Zigbee specifications: Zigbee PRO, Zigbee RF4CE and Zigbee IP.

Zigbee PRO aims to provide the foundation for IoT with features to support low-cost, highly reliable networks for device-to-device communication. Zigbee PRO also offers Green Power, a new feature that supports energy harvesting or self-powered devices that don't require batteries or AC power supply.

Zigbee RF4CE is designed for simple, two-way device-to-device control applications that don't need the full-featured mesh networking functionalities offered by the Zigbee specification.

Zigbee IP optimizes the standard for IPv6-based full wireless mesh networks, offering internet connections to control low-power, low-cost devices.

Mesh network

Mesh networks are decentralized in nature: flexible, reliable, and expandable. Each device acts as an End Node, a Router, or the Coordinator, and nodes can communicate peer-to-peer for high-speed direct communication, or node-to-Gateway.

Zigbee and Z-Wave are two well-known mesh networking technologies. In a mesh network, nodes are interconnected with other nodes so that multiple pathways connect each node. Connections between nodes are dynamically updated and optimized through sophisticated, built-in mesh routing tables.
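The self-healing effect is easy to picture with a toy graph model. The sketch below is an illustration of the routing idea, not Zigbee code, and assumes the networkx library:

```python
# Toy illustration of mesh self-healing: multiple pathways mean a failed
# router does not cut a node off; traffic simply takes another route.
import networkx as nx

mesh = nx.Graph()
mesh.add_edges_from([
    ("coordinator", "router1"), ("coordinator", "router2"),
    ("router1", "router2"), ("router1", "sensor"), ("router2", "sensor"),
])

print(nx.shortest_path(mesh, "sensor", "coordinator"))  # e.g. via router1

mesh.remove_node("router1")                              # a router fails
print(nx.shortest_path(mesh, "sensor", "coordinator"))  # route heals via router2
```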

Security

Zigbee is inherently secure. It provides options for authentication and data encryption, and it uses 128-bit AES encryption keys, similarly to its primary competitor, Z-Wave (the pros and cons of Z-Wave will be considered in the next article).
This, plus short-range signals, makes Zigbee secure. However, most home automation protocols have similar levels of security when you configure them properly.

Power consumption

Power consumption for Zigbee is comparable to BLE. However, the proven routed-mesh mechanism adopted in Zigbee makes it slightly more power-efficient.

What Is Zigbee Compatible With?

Zigbee-compatible devices can be controlled through hubs such as Samsung SmartThings and Amazon Echo. Well-known Zigbee products include the Amazon Echo Dot, Philips Hue, IKEA Tradfri, Hive Active Heating and its accessories, and a variety of Honeywell thermostats.

Conclusion. Why choose Zigbee?

Comparing Zigbee with the existing connectivity options, it's obvious that Zigbee offers multiple advantages over Bluetooth.

For example, BLE works best for smaller packets (less than 12 bytes). At such sizes it is comparable to Zigbee, but as packet size increases, the higher BLE layers fragment the data and latency grows.

However, BLE has a cost advantage over Zigbee. BLE mesh has a bigger ecosystem and uses the same BLE chipset used in other applications, so high-scale production of BLE chipsets pulls down the cost of the IC compared to Zigbee.

The need for a gateway device further increases the cost of an overall Zigbee system. BLE-based systems can provide limited functionality (everything except full-fledged internet connectivity) without a gateway. In addition, licensing of Zigbee is more expensive and complex than BLE.

Meanwhile, Zigbee is more cost-effective and uses significantly less energy than Wi-Fi, resulting in better battery life. As for another "rival", LoRaWAN: it is significantly cheaper than Zigbee, and the two are close in some characteristics.

If you are looking for a cheap, long-battery-life sensing project with no anticipated real-time, control, or automation requirements, where slower poll rates are suitable, then LoRaWAN is a good contender and a good choice for many entry-level sensing applications.
But if automation control or faster poll rates are necessary, it's better to step up to Zigbee. As mentioned, Z-Wave will be considered next time.

Where is Zigbee used?

The Zigbee wireless communication system is used in homes, businesses, and other locations.
Zigbee can transmit data over distances that are sufficient for most applications. It is a clear winner for industrial applications that require reliability, real-time monitoring, control, or automation, and the protocol is highly underrated for low-power sensing.

 

 

NFT – the most contradictory component of Web3

There are probably no people left who have not heard of NFTs yet.

As a vital component of Web3 (the next iteration of the Internet), along with the Metaverse and DeFi, NFTs evoke perhaps the most contradictory feelings in society, from enthusiasm sometimes bordering on insanity to outright hostility and harsh criticism.

An NFT, or non-fungible token, is a unique unit of data that is verified and stored on the blockchain and can be linked to digital or physical assets to provide immutable proof of ownership. Blockchain technology allows NFTs to be tracked in an immutable digital ledger that provides a history of the asset and can be verified at any time. As a result, NFTs cannot be replicated, destroyed, or counterfeited.

NFTs are primarily created on Ethereum, but other blockchains support them as well.

Before NFTs can be sold, they must first be minted. Minting an NFT means converting a digital file into a digital asset that can be published and stored on the blockchain, making it available to potential buyers. The minting process is not free: you need a crypto wallet and a certain amount of cryptocurrency to cover the Ethereum "gas fees." The most popular NFT marketplaces on the Ethereum blockchain are OpenSea, Rarible, and Mintable.
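Under the hood, marketplaces do roughly what the following web3.py sketch shows. Everything here is hypothetical: the endpoint, the contract address, the mint() function, and the keys; a real contract defines its own minting interface.

```python
# Hedged sketch of minting via web3.py (v6). All addresses, keys, and the
# mint(tokenURI) function are hypothetical; gas fees are paid in ETH.
from web3 import Web3

MINT_ABI = [{
    "name": "mint", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "tokenURI", "type": "string"}], "outputs": [],
}]

w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<project-id>"))  # placeholder
contract = w3.eth.contract(address="0xYourErc721Contract", abi=MINT_ABI)   # placeholder

MY_ADDRESS = "0xYourWallet"          # placeholder
MY_PRIVATE_KEY = "<never hardcode>"  # placeholder

tx = contract.functions.mint("ipfs://<metadata-cid>").build_transaction({
    "from": MY_ADDRESS,
    "nonce": w3.eth.get_transaction_count(MY_ADDRESS),
})
signed = w3.eth.account.sign_transaction(tx, private_key=MY_PRIVATE_KEY)
print(w3.to_hex(w3.eth.send_raw_transaction(signed.rawTransaction)))
```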

Today, almost anything can become an NFT: paintings, photos, videos, music, gifs, memes - any kind of unique art that can be represented digitally. Or it can even be real estate, collectibles, event tickets, website domains or tweets.

Famous auction houses Christie’s and Sotheby’s have already made sales of NFT artworks for several hundred million dollars.

The first experiments with NFTs started back in 2013, but the wave of hype rose only in 2021, and sometimes it looked like real insanity. The creator of the Nyan Cat meme received $580,000 in cryptocurrency for a gif of the famous cat, and digital artist Beeple sold the token of a jpeg collage, Everydays: The First 5000 Days, for $69.3 million.

NFT technology is actively used by both well-known and not yet recognized artists. The main factor in the growing popularity of NFT is the opportunity for a beginner to present his or her work to a wide audience. A few years ago, a new artist had to work hard for several years before reaching the first serious exhibition, and still success was not guaranteed. Today it's enough to convert your painting into a digital format, create a corresponding NFT token (it's not that complicated) and sell it for real money.

Actually, NFT technology can be used for transactions with any digital assets; however, recent trends show a growing interest in selling real things as NFTs. These can be, for example, sculptures, antiques, a coin collection, and so on. But if converting paintings into digital format is a common thing today, how can a real physical object become an NFT?

The first way is to create a 3D digital copy of the object. Technologies that allow the average person to create such copies are becoming more and more accessible. Naturally, this attracts a lot of interest from businesses and corporations that have already been using and investing in 3D technologies to promote their brands and products not only in the real world but also in the virtual world of the metaverse. In addition, NFTs of 3D objects are expected to stand in for our favorite real-world things, objects, and assets in the metaverse, making it even more similar to our everyday environment.

But what if we take objects that are difficult to digitize? Last month, for example, a three-bedroom house in South Carolina was sold as an NFT for $175,000. The buyer indicated that he was able to make the transaction for that property with just one click. How does it work?

In simple terms, the selling company creates an NFT that represents ownership of the house. Whoever buys this NFT becomes the owner of the property. Although the purchase is made digitally, the ownership is absolutely real: whoever owns the NFT owns the house in the real world.

Although such transactions are still viewed with suspicion by the majority, there are serious reasons to believe that NFT technology could open the door to a decentralized economy without intermediaries such as banks or a government. In the future, it may completely change the rules of the markets.

Today, besides art and real estate, the industries with the most NFT potential are gaming, education, healthcare, supply chain, and logistics. NFT tokens can be used to confirm any important document: a diploma, health records, a marriage registration certificate, and so on.

A serious barrier to NFT adoption in these areas is the lack of government regulation. It is very likely that in the case of fraud or a hacker attack, the affected party will not be able to recover its losses.

Moreover, NFT technology faces many other challenges today. One of the main arguments against NFTs is their huge energy consumption and extremely negative impact on the environment. However, now that Ethereum has switched from the power-intensive Proof-of-Work protocol to mining-free Proof-of-Stake, it is possible that NFTs will become more eco-friendly and grow their audience.

With both devoted fans and haters, NFTs remain one of the most hyped components of Web3, mentioned every now and then in the news and social media, either alongside figures containing an impressive number of zeros or alongside facts that raise a no less impressive number of questions and misunderstandings.

 

Transcribe

Amazon launched the Transcribe service in 2017, enabling developers to add speech-to-text features to their applications.

Analyzing audio files and extracting data from them is almost impossible for computers. To use such data in an application, speech must first be converted to text. Speech recognition services certainly existed before, but they were generally expensive and poorly adapted to various scenarios, such as the low-quality phone audio in some contact centers.

Powered by deep learning technologies, Amazon Transcribe is a fully managed and continuously trained automatic speech recognition service that generates time-stamped text transcripts from audio files. The service parses audio and video files stored in many common formats (WAV, MP3, MP4, AMR, FLAC, etc.) and returns a detailed and accurate transcription with timestamps for each word, as well as appropriate capitalization and punctuation. For most languages, numbers are transcribed in word form; however, for English and German, Transcribe treats numbers differently depending on the context in which they're used.

Transcribe currently supports 37 languages.

Transcription methods can be divided into two main categories:

  • Batch transcription: transcribing media files that have been uploaded to an Amazon S3 bucket (a minimal example is sketched below).
  • Streaming transcription: transcribing media streams in real time.
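A minimal batch example with boto3 might look like this; the job and bucket names are placeholders:

```python
# Sketch: batch transcription of a file already uploaded to S3.
import boto3

transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="meeting-demo",                   # hypothetical
    Media={"MediaFileUri": "s3://my-bucket/meeting.mp4"},  # hypothetical
    MediaFormat="mp4",
    IdentifyLanguage=True,          # or pass LanguageCode="en-US" instead
    OutputBucketName="my-transcript-bucket",
)

job = transcribe.get_transcription_job(TranscriptionJobName="meeting-demo")
print(job["TranscriptionJob"]["TranscriptionJobStatus"])  # QUEUED / IN_PROGRESS / COMPLETED
```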


Here are some of the features it provides:

  • Single and multi-language identification: identifies the dominant language spoken in your media file and creates a transcript. If speakers change language during a conversation, or if each participant speaks a different language, the transcription output correctly detects and transcribes each language.
  • Transcribing multi-channel audio: combines transcriptions from multi-channel audio into a single output file. Channel identification can be enabled for both batch processing and real-time streaming.
  • Speaker diarization: partitions the text by speaker, detecting each speaker in the provided audio file.
  • Custom language models: designed to improve transcription accuracy for domain-specific speech, meaning any content that goes beyond everyday conversation. For example, an audio recording of a talk at a scientific conference will obviously contain specialized scientific terms that standard transcription is unlikely to recognize. In this case, you can train a custom language model to recognize the terms used in your discipline.
  • Custom vocabularies: used to improve transcription accuracy for a list of specific words. These are generally domain-specific terms, such as brand names, acronyms, proper nouns, and words that Amazon Transcribe isn't rendering correctly (see the sketch after this list).
  • Tagging: adds custom metadata to a resource to make it easier to identify, organize, and find in a search.
  • Subtitles: create closed captions for your video and filter inappropriate content from your subtitles.
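As a hedged sketch of the custom vocabulary and speaker diarization items above (names are placeholders, and the boto3 client from the earlier example is assumed):

```python
# Sketch: create a custom vocabulary, then reference it in a job together
# with speaker diarization. Vocabulary creation is asynchronous: wait until
# get_vocabulary() reports the state READY before starting the job.
transcribe.create_vocabulary(
    VocabularyName="brand-terms",                        # hypothetical
    LanguageCode="en-US",
    Phrases=["Inmost", "Zigbee", "ESP-IDF"],             # illustrative terms
)

transcribe.start_transcription_job(
    TranscriptionJobName="branded-call-demo",            # hypothetical
    Media={"MediaFileUri": "s3://my-bucket/call.wav"},   # hypothetical
    MediaFormat="wav",
    LanguageCode="en-US",
    Settings={
        "VocabularyName": "brand-terms",
        "ShowSpeakerLabels": True,                       # speaker diarization
        "MaxSpeakerLabels": 2,
    },
)
```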

Transcribe offers indispensable features for call centers and support services. It helps to capture useful insights by transcribing customer calls in real time. Analyzing and categorizing calls by keywords, phrases and sentiment can help track negative situations, identify trends in customer issues or allocate calls to specific departments.

It is also possible to measure speech volume. This metric helps to understand whether the customer or the employee is talking loudly, which is often an indication of being angry or upset. The quality of communication with the client can also be assessed through metrics such as interruptions, non-talk time, talk speed, and talk time.

Besides call centers, the Transcribe service can be useful in almost any field: education, law, e-commerce, and many others. In healthcare it pairs with Amazon Comprehend Medical, a machine-learning-powered, HIPAA-eligible service pre-trained to identify and extract health data from medical texts such as prescriptions, procedures, or diagnoses.

It is difficult to imagine modern technology without a service that can transform speech into text. And of course, Transcribe has analogues from other digital giants. However, it is worth noting that many developers who have used the Amazon service report much higher quality and accuracy compared to similar solutions on the current market.

 

 

NVIDIA chipsets for IoT

We have already discussed the chipsets we worked with in one of our projects with static IoT devices. Now it is time to learn more about chipsets for moving IoT devices. So, NVIDIA chipsets: why did the Customer give his heart to them?

NVIDIA boards earned their fame and reputation among gamers and graphics designers (the GeForce series) some time ago, and now NVIDIA has the Jetson series.

The first board was the TX1, released in November 2015. NVIDIA has since released the more powerful and power-efficient Jetson TX2 board.

The Jetson boards are siblings to NVIDIA’s Drive PX boards for autonomous driving and the TX2 shares the same Tegra “Parker” silicon as the Drive PX2.

There are many synergies between the two families as both can be used to add local machine learning to transportation. The Drive PX boards are designed for automotive with extended temperature ranges and high reliability requirements. The Jetson boards are optimized for compact enclosures and battery power for smaller, portable equipment.

With devices such as robots, drones, 360 cameras, and medical equipment, Jetson can be used for "edge" machine learning. The ability to process data locally and with limited power is useful when connectivity bandwidth is limited or spotty (as in remote locations), when latency is critical (real-time control), or where privacy and security are a concern.

Another innovative solution from NVIDIA is the Jetson Nano.

The Jetson Nano development board is a small but powerful AI computer that starts from a MicroSD card with a system image. Built around a system-on-chip (SoC), it runs neural network frameworks such as TensorFlow, PyTorch, Caffe/Caffe2, Keras, and MXNet, enabling functionality such as image classification, object detection, speech segmentation, and intelligent analytics. It is usually used to build autonomous robots and complex artificial intelligence systems.

The Customer chose this chip for his moving device because it was extremely important to detect obstacles and determine direction. The chipset's functionality covered all these tasks quite successfully.

You may ask: why not choose the Raspberry Pi, which is also more reasonably priced?

The Raspberry Pi was considered as an alternative. In fact, the two boards are very similar in primary functions, with ARM processors, 4 GB of RAM, and a series of peripherals.

As for video out: the Nano has both HDMI 2.0 and DisplayPort available, which can be used at the same time. The Pi is limited to either its HDMI port or its proprietary display interface, which, as far as we at Inmost know, cannot be used simultaneously.

They both have multiple ways of interfacing, including I2C, I2S, serial, and GPIO, but we also appreciate that the Nano has USB3.0 and Gigabit Ethernet.

However, the biggest difference is that the Raspberry Pi has a low-power VideoCore multimedia processor, while the Jetson Nano contains a higher-performance, more powerful GPU, which supports functions the Raspberry Pi can't handle. This makes deeper development possible and gives the Jetson Nano more potential.

For our customer's project, fast processing of video from the camera was the number one task, so the decision to use the Jetson Nano for this problem was clear.

The NVIDIA Jetson system is high-performance and power-efficient, making it one of the best and most popular platforms for building AI-based machines at the edge (edge machine learning).

Private blockchain

“Our virtues are generally but disguised vices” – La Rochefoucauld

 

Why do we need a private blockchain at all?

One of the blockchain types supported by Amazon is Hyperledger Fabric. There is a dedicated Amazon Managed Blockchain service for Hyperledger Fabric on the AWS platform that simplifies setting up blockchain networks and reduces the time needed to deploy solutions based on them. Hyperledger Fabric is a private blockchain, so let's first take a closer look: what tasks can a private blockchain perform in general?

As you know, a public blockchain has three main properties:

- Decentralization - there is no single node or dedicated group of nodes that stores the information separately; the information is duplicated in as many copies as there are users in the system.

- Transparency - every user has access to the entire database and can track all changes.

- Reliability - all records form a chain, and each new record is linked to the previous ones by a special mathematical function that depends on the data in the previous elements of the chain. This ensures that the data cannot be changed retroactively (a minimal illustration follows below).
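This chaining property is simple enough to demonstrate in a few lines; here is a minimal, purely illustrative hash chain in Python:

```python
# Minimal illustration: each record stores the hash of the previous one,
# so any retroactive change breaks every later link in the chain.
import hashlib
import json

def record_hash(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

chain = [{"data": "genesis", "prev": None}]
for payload in ["tx1", "tx2", "tx3"]:
    chain.append({"data": payload, "prev": record_hash(chain[-1])})

def chain_is_valid(chain: list) -> bool:
    return all(chain[i]["prev"] == record_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(chain_is_valid(chain))   # True
chain[1]["data"] = "tampered"  # a retroactive edit...
print(chain_is_valid(chain))   # False: the forgery is detected
```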

 

All these components together allow you to build an information storage system in which each individual element (user) is untrusted, but in combination they form a trustworthy repository.

Why is this concept not suitable for an enterprise environment?

First of all, the lack of user identification. This is especially critical when performing financial transactions in an enterprise environment. In 2016, the concept of KYC (Know Your Customer) appeared in the official documents of FinCEN, the US Treasury's bureau for combating financial crime, which requires financial institutions to identify their customers before allowing them to conduct financial transactions.

In addition, back in 1989, the FATF introduced the principles of AML (Anti-Money Laundering) - measures to combat money laundering - and these principles also require user identification. Thus, there are powerful arguments for why an enterprise blockchain should be private rather than public.

Is this the only difference? If we create a blockchain network with access restricted to authenticated users, can we reuse the other elements of the public blockchain architecture for an enterprise system? No, we cannot.

On a public blockchain, we use various consensus mechanisms to validate a transaction that adds a new block of data to the chain. All of these mechanisms rely on all network users participating in the validation process and receiving a reward for this participation in one way or another.

And to pay this reward, each public blockchain invents its own cryptocurrency. Cryptocurrencies and public blockchains cannot exist without each other. Currently, there are over 10,000 different cryptocurrencies in the world. This amount significantly exceeds the number of fiat currencies, the value of which is guaranteed by the states that issue them. It is quite difficult to release a new cryptocurrency that has at least some value to the public.

This idea is not suitable for a corporate network. For two reasons:

- First, keeping a complete copy of the entire database on the computer of every employee with access to the corporate network, just to participate in consensus, will not cause enthusiasm among the security service of any corporation, no matter how powerful the encryption algorithms protecting the information are.

- Second, the idea of introducing an internal cryptocurrency on the corporate network also seems strange.

 

This leads to the conclusion that a private blockchain needs a consensus mechanism based on a centralized algorithm.

So, what is left of the original idea? First, we deliberately abandoned decentralization (although the decentralization level of public blockchains based on the Proof-of-Stake consensus mechanism is very doubtful anyway), and then transparency was dropped. In the end, only reliability remains: all records form a chain, and each new record is linked to the previous ones through a special mathematical function that depends on the data in the previous elements of the chain, ensuring that the data cannot be changed retroactively.

 

In fact, that is not so little. We have obtained reliable data storage located in the corporate network whose information cannot be faked, no matter what access rights the would-be forger holds. And because there is a central node, or a set of nodes, responsible for confirming transactions, recording information is significantly faster than in public blockchains. There are many applications for such reliable, fast-access storage in corporate networks. We will look closer at some of them soon and talk about how Hyperledger Fabric solves this problem.

 

DeFi

We mentioned the decentralized finance system, DeFi, when talking about the trading metaverse MetaFi (http://www.inmost.pro/blog/metafi-first-social-trading-metaverse/).

As one of the hottest topics in the crypto world over the last few years, DeFi is definitely worth a closer look.

DeFi is a global blockchain-based financial system built to meet the needs of the new Internet iteration - Web-3. It is an alternative to tightly controlled traditional systems with outdated infrastructure and processes. It allows you to control and have direct access to your money. DeFi eliminates the fees that banks and other financial institutions charge for using their services. People store money in a secure digital wallet and funds transfer takes only a few minutes. It also provides access to global markets and creates alternatives to local currency or banking solutions. Any traditional services provided by financial institutions can be expected to be offered through DeFi. 

While not everyone has the ability to open a bank account and use traditional financial services, anyone with access to the Internet can use the services offered by DeFi products. Currently, tens of billions of dollars in cryptocurrencies have flowed through DeFi programs, and the number of transactions is increasing every day.

DeFi markets are always open and there is no centralized authority limiting their working time, blocking payments or denying access. This decentralization aspect is considered to be one of the main advantages of DeFi.

To provide services without third parties, DeFi uses cryptocurrencies and smart contracts, transferring trust from intermediaries to machine algorithms.

A smart contract is a self-executing contract whose terms and conditions are defined and enforced through automation, approved autonomously and efficiently on the blockchain. No one can change a smart contract once it is launched: it will always work as programmed. Smart contracts are public, so anyone can view and monitor them. This means the community can quickly detect a compromised contract and react accordingly.

The security, privacy, and transparency of DeFi services are also based on the fundamental advantages of blockchain, as the records in chain blocks cannot be changed or controlled by any authority.

Even though most DeFi services are now built on Ethereum, Bitcoin was the real DeFi pioneer, giving people the ability to own, control, and send assets anywhere in the world. Bitcoin is open to everyone, and no one can change its rules. Its concepts really differ from the traditional financial world, where governments can print money and devalue your savings, and companies can shut down markets.

Now, Ethereum is the ideal foundation for DeFi. Most DeFi products are actually powered by Ethereum, so many of them can easily be configured to interact: you can borrow tokens on one platform and exchange them on another market in a completely different program. Tokens and cryptocurrency are written into the Ethereum blockchain, and a shared ledger that tracks transactions and ownership is one of Ethereum's unique features.
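Because the ledger is shared, any program can read token state directly from the chain. Below is a hedged web3.py sketch with placeholder addresses; balanceOf and decimals are the standard ERC-20 functions.

```python
# Sketch: reading a token balance straight from the shared Ethereum ledger.
from web3 import Web3

ERC20_ABI = [
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "decimals", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint8"}]},
]

w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<project-id>"))  # placeholder
token = w3.eth.contract(address="0xTokenContract", abi=ERC20_ABI)          # placeholder

raw = token.functions.balanceOf("0xSomeWallet").call()                     # placeholder
print(raw / 10 ** token.functions.decimals().call())  # human-readable balance
```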

Like every other system, DeFi is composed of different parts. Its infrastructure consists of layers that are responsible for various processes and guarantee the smooth functioning of transactions and contracts:

  1. Settlement Layer: also called Layer 0. Based on the Ethereum blockchain, it serves as the foundation for all DeFi transactions, writing code, and building applications. This is the vital component: the DeFi system cannot exist without the blockchain.

  2. Protocol Layer: defines rules and standards for all DeFi transactions, it is a description of the specific conditions necessary for the code to run accurately and fulfill its tasks. All the protocols are interoperable and can be used to create any application in the DeFi ecosystem.

  3. Application Layer: consists of decentralized applications or dApps - products created on the basis of two previous layers that serve as a kind of front-end in the DeFi ecosystem, enabling consumers to use DeFi services. With dApps you can buy, sell, trade, lend, and borrow cryptocurrencies on a decentralized network.

  4. Aggregation Layer: at this layer third-party vendors create end-to-end solutions by bringing together existing decentralized applications and offering users and investors a wide range of financial services in one place.

 

The list of DeFi services is constantly growing; here are some of them:

  • Money transactions around the world

  • Access to stable currencies

  • Loans

  • Deposits

  • Trading

  • Investments

  • Insurance

However, despite the great financial freedom offered to users, serious challenges for DeFi still exist, for example a lack of consumer protection. DeFi is free of rules and regulations, but this means users often have no legal protection if something goes wrong. There are no government reimbursement systems for DeFi and no laws requiring capital reserves for DeFi service providers.

The problem is that the rules and restrictions that could potentially protect the user do not fit the decentralization concept. So the path forward may be unclear, but it will certainly be important for DeFi investors to monitor the evolution of the regulatory environment for this new financial sector.

Despite all these concerns and its so far insufficient resistance to hacker attacks, DeFi may gradually break the monopoly of traditional financial institutions and decrease the cost of traditional financial services by removing barriers and giving everyone equal access to financial infrastructure.

 

 

ESP32 Overview

IoT hardware is at the heart of every connected project. 

However, choosing exactly the right IoT hardware for your project can be overwhelming due to the sheer number of development boards and modules in the space.

In our practice, we are guided by the Customer's choice.

However, it is doubtless useful to know more about a board's specifications and possibilities.

One of the chipsets we have worked with in our projects is the ESP32 by Espressif Systems.

Espressif is a fabless semiconductor company that develops Wi-Fi and Bluetooth low-power IoT hardware solutions. 

They are most well-known for their ESP8266 and ESP32 series of chips, modules, and development boards. 

In fact, many development boards across the industry run on Espressif chips (like Sparkfun’s development kits).

Espressif development boards are designed for simple prototyping and interfacing but can be used as a simple proof of concept or enterprise solution. Espressif also offers several software solutions designed to help you manage devices around your home and integrate wireless connectivity into products. Some of the IoT development boards they offer are:

2.4 GHz Wi-Fi & BT/BLE Development Boards  —  These boards provide PC connectivity, 5V/GND or 3V3/GND header pins, ESP-IDF source code, and example applications. They support everything from image transmission to voice recognition and come with a variety of possible features, such as an onboard LCD, JTAG, a camera header, RGB LEDs, etc.

2.4 GHz Wi-Fi Development Boards  —  Standard set of development boards that integrate the commonly-used peripherals.

As mentioned, you can surely use the ESP32 for prototyping and establishing a Proof of Concept (PoC). If you need to use several devices, the ESP32 is a great fit for your app.

One of the major advantages of the ESP32 is its built-in Wi-Fi and Bluetooth stacks and hardware.

Therefore, the ESP32 will be your microcontroller of choice in a static application where good Wi-Fi connectivity is guaranteed, say, a heating-equipment monitoring application in a stationary appliance. The presence of the Wi-Fi stack on the module itself means you save money on an additional networking module.
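For example, on an ESP32 running MicroPython, joining a Wi-Fi network needs nothing beyond the on-module stack; a small sketch with placeholder credentials:

```python
# MicroPython on ESP32: the built-in Wi-Fi stack, no extra networking module.
# SSID and password are placeholders.
import network
import time

wlan = network.WLAN(network.STA_IF)   # station mode: join an existing network
wlan.active(True)
wlan.connect("my-ssid", "my-password")

while not wlan.isconnected():         # wait for the connection and DHCP lease
    time.sleep(0.5)

print(wlan.ifconfig())                # (ip, netmask, gateway, dns)
```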

However, if you use ESP32 in an asset tracking application, where it keeps moving around, you will have to rely on a GSM or LTE module for connectivity to the server (because you will not be guaranteed WiFi availability). In such a scenario, ESP32 loses the competitive advantage. We will discuss a more suitable board for moving devices next time.

To recap, the ESP32 has specs that are good enough to accommodate most of your applications. When scaling up production, you just need to make sure the specs are not more than you need.

In other words, if you can get the desired output with modest specs, you may be better off using a cheaper microcontroller and save money. These savings become significant when your production numbers increase by orders of magnitude.

However, production aside, the ESP32 is definitely an ideal microcontroller for prototyping and establishing a PoC. That was the reason why our customer preferred this board for his prototype.

 

 

Dogami – petaverse

Remember the Tamagotchi, a digital pet from the past? Perhaps the popularity of this toy, which seems quite primitive today, lay in the possibility of adopting a pet even for those who are unable to do so for whatever reason.

For example, many people like dogs, but few have enough time to spend with a pet and the means to create all the necessary conditions for its care. Soon this problem may be solved by a quite realistic and colorful petaverse: Dogami.

Players (Dogamers) can adopt virtual dogs, play games and compete with others. It will be possible to interact with the pet through augmented reality using the Dogami app, available for any type of smartphone on iOS or Android.

According to the developers, the Dogami petaverse roadmap includes land sales and dog crossbreeding: there will be an opportunity to mint a new puppy NFT, with a random gender, from two virtual pet NFTs.

The game is built on the Tezos blockchain, which provides low gas fees, fast data processing and, as the developers assure, clean NFTs with a very low carbon footprint thanks to highly energy-efficient blockchain technology. The petaverse's utility token, $DOGA, can be used to raise your dog, buy event tickets and consumables, and build a digital wardrobe by purchasing virtual accessories and luxury items in the marketplace, such as caps, bucket hats, varsity jackets, bandanas, beds, bed pillows, hoodies, and belt bags.

By the way, Gap Inc., the American worldwide clothing and accessories retailer, teamed up with Dogami to launch the first fashion collaboration in the petaverse. Each item of the collection will be available in the game to create an individual style for your virtual pet.

And, of course, it's worth mentioning Dogami's play-to-earn concept. You can earn $DOGA by completing daily challenges and being rewarded for multiple days of consistent play. The amount of earnings strongly depends on your game level: at level 1 you can earn a maximum of 5 $DOGA a day, but by level 10 it is already up to 50 $DOGA.

On September 14, the first 100 Dogamers got the chance to participate in the early stage of the game and support Dogamí's development.

Even though Dogami has not yet fully launched, it has already attracted a huge amount of attention and positive feedback, since the game's theme has a really large audience. Now just imagine the potential of a metaverse for cats!

 

MetaFi – first social trading Metaverse

We have already written a lot about metaverses. We talked about metaverses where you can discover new lands, farm, live in a giant skyscraper, and even about a pet metaverse. We have mentioned that the fashion, entertainment, and sports industries are actively investing in their meta-future. After all, they understand the opportunities of brand promotion in the virtual world, because the metaverse is not just a realistic 3D virtual game, but a huge ecosystem with immense potential and a rapidly developing economy. According to industry experts, the Metaverse will far surpass the real world economy in the coming years.

The metaverse economy is built on cryptocurrencies - decentralized digital money based on blockchain technology. Decentralized Finance, or DeFi, is a platform that allows investors to trade financial products over a decentralized blockchain network.

Transactions do not require intermediaries such as banks or brokers. DeFi accepts investors who do not have a government ID, brokerage or bank account, proof of residency, or social security card.

What happens when you combine the Metaverse and DeFi? Welcome to MetaFi!

The developers of MetaFi claim that the current market products are not able to make Web3 truly decentralized: it is actually being traded, and talked about, on Web2. MetaFi changes this forever by making trading truly social on Web3.

The MetaFi World is divided into futuristic trading zones focused on each kind of asset: you can trade tokens, NFTs, tokenized stocks, commodities, and bonds.

The $METAFI token is hyper-deflationary by design: the deflationary mechanism is based on the fact that as more users enter the metaverse and trade, more fees will be generated that will be used for activities that reduce MetaFi's circulating supply (buying and burning, providing liquidity, placing bets).

To enter the MetaFi world, you will need to create your virtual avatar - MetaFi Citizen and connect it with your crypto wallet.

One of the most important features of MetaFi is the ability to communicate: participants can chat, exchange text and voice messages and images, share knowledge, and show their trades in real time. In this world, everyone trades together and tries to beat the market using collective thinking, research, and technical analysis.

“Most trading platforms compete with nearly identical products. MetaFi reimagines the trader’s experience: making trading fun, engaging, and social. With the Trading World, MetaFi will aggregate major decentralized protocols with deep liquidity, wrapped in a new gamified trading environment, designed to be a seamless onramp for non-crypto audience,” said Matt Danilaitis, founder of MetaFi.

This month, MetaFi announced the successful completion of a $3 million funding round, so the company and its Web3 virtual trading platform are now valued at $25 million.

The booming interest in cryptocurrencies and metaverses, as well as the opportunity to improve trading skills while communicating and exchanging experience with other participants, makes MetaFi very attractive to the crypto community. Currently there are over 100,000 participants on the waiting list.

 

Is there an alternative to PoW and PoS algorithms?

What is a consensus algorithm in the blockchain? Since decentralized networks require tools to agree on decisions and to ensure overall reliability, a mechanism for coordinating system processes has been developed, called a consensus algorithm. It is a decision-making procedure that prevents centralized control of the network and ensures that everyone follows the rules.

One of the main differences between various cryptocurrency networks is the type of consensus algorithms used.

In the article about Ethereum Merge (http://www.inmost.pro/blog/ethereum-merge/), we talked about the two currently most commonly used algorithms: proof-of-work and proof-of-stake. Both have their pros and cons, their supporters and opponents. And while the crypto industry rapidly evolves, developers will continue to come up with new solutions searching for the perfect one.

Let us now take a look at what alternative consensus algorithms exist, although they are less popular than proof-of-work and proof-of-stake.

 

Proof-of-Burn (PoB)

Unlike proof-of-work, this algorithm is not very energy consuming. Miners do not need to invest in physical resources (powerful hardware). Instead, they burn cryptocurrency coins to be selected to mine and validate a new block. Coins sent for burning can no longer be recovered. The more coins burned, the higher the chance of being selected as a validator. The system rewards miners over a certain time to cover the initial cost of the burned coins.

One of the main drawbacks of this algorithm is that it does not really reduce energy consumption, since in most cases the coins used for burning are bitcoins mined with proof-of-work.

Also, this algorithm lacks speed, and since it is not yet widely used, its efficiency and security still need to be tested.

 

Proof-of-Authority (PoA)

This algorithm is based on a reputation concept and uses a limited number of block validators. Blocks and transactions are verified by pre-approved participants with confirmed identities, who act as system moderators.

To be selected as a validator, a candidate must be trustworthy and have no criminal record. Reputation is a major investment here, as confidence in the identity of the validator ensures the security and reliability of the entire network.

Clearly, this approach, in addition to requiring disclosure of the validators' identities, has another significant flaw: centralization. However, this very factor makes PoA attractive for large enterprises and private use.

 

Delegated-Proof-of-Stake (DPoS)

DPoS consensus is achieved by the voting of delegates (third parties) authorized by the stakeholders, with voting power proportional to the number of coins each user holds (a toy sketch follows below). The mechanism of this algorithm thus also relies on the reputation of the delegates: in case of suspected manipulation or rule violations, the community can replace a delegate at any time.
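A toy, purely illustrative sketch of that stake-weighted election:

```python
# Toy DPoS election: votes are weighted by each stakeholder's coins, and
# the top-voted delegates become the active block producers.
from collections import Counter

stakes = {"alice": 120, "bob": 30, "carol": 50}             # coins held
votes = {"alice": "delegate-1", "bob": "delegate-2", "carol": "delegate-1"}

tally = Counter()
for voter, delegate in votes.items():
    tally[delegate] += stakes[voter]     # voting power proportional to stake

print([d for d, _ in tally.most_common(2)])  # e.g. the top 2 produce blocks
```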

Delegated-proof-of-stake was designed to be more efficient than the proof-of-stake and proof-of-work consensus algorithms, especially in terms of transaction processing speed.

One of the main problems with DPoS consensus is the possibility of collusion between delegates. This can lead to centralization of the network and increase vulnerability to attacks.

 

Proof-of-Elapsed-Time (PoET)

This is an algorithm that avoids high resource usage and high energy consumption. Its concept was invented by Intel.

Determining the node that gets the privilege of adding a block is a kind of lottery in which all participants of the network have equal opportunities. Each node in the blockchain generates a random wait period and sleeps for that specified amount of time. The node with the shortest waiting time "wakes up" first and adds a new block to the chain, passing the necessary information to the entire network. The same process is then repeated to find the next block.
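The lottery is easy to simulate; a toy sketch (real PoET additionally relies on trusted hardware to prove that a node actually waited):

```python
# Toy PoET lottery: every node draws a random timer with equal odds, and
# the one that "wakes up" first publishes the next block.
import random

nodes = ["node-a", "node-b", "node-c", "node-d"]
wait_times = {node: random.uniform(0, 10) for node in nodes}

winner = min(wait_times, key=wait_times.get)   # shortest wait wakes first
print(f"{winner} adds the next block after {wait_times[winner]:.2f}s")
```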

The cause for concern regarding PoET is a recently discovered vulnerability in Intel's technology, which serves as the foundation of the protocol. In addition, making consensus rely on a third party's technology (Intel's) runs against the paradigm that cryptocurrencies are trying to implement with blockchain networks: removing the need to trust intermediaries.

 

Proof-of-Capacity (PoC)

It allows mining equipment to use the available hard drive space to determine mining privileges instead of using the device's computing power. The larger the hard drive, the more possible solution values can be stored on it, and the higher the miner's chance of matching the required hash value from his list, providing a better chance of earning the reward.

Despite the fact that the mining process is part of this protocol, it is considered less energy consuming, as there is no need for super-powerful hardware. The disadvantage of this protocol is insufficient security and vulnerability to malware attacks.

This is far from a complete list of existing consensus algorithms. The algorithms listed here are used quite infrequently; they have good potential, but there is still a lot of room for improvement.

Even though proof-of-work is still the most commonly used algorithm today, Ethereum's recent move to proof-of-stake looks like a bit of a gold rush among companies looking for the perfect consensus algorithm to move the industry forward.

 

Ethereum after the Merge – what has changed?

The Ethereum upgrade, one of the most impressive achievements in the blockchain industry, has finally been completed.

"And we finalized it!... Happy merge all! This is a big moment for the Ethereum ecosystem," said Ethereum co-founder Vitalik Buterin in a tweet. 

So, what has changed since Ethereum switched to Proof of Stake, besides the ETH price dropping 20% over the last 7 days?

Since the preparations for the Merge in the Ethereum community have been ongoing for several years, it is currently unlikely that the event itself will cause significant changes in the overall development of Web-3.

There are prerequisites for a positive trend in the NFT segment, as many artists and users have felt an antipathy toward blockchain technologies due to the environmental impact of their high energy consumption. By switching to proof-of-stake, Ethereum became much more eco-friendly. In fact, less than an hour after the Merge was completed, a user spent 36 ETH (about $60,000) to mint the first NFT on the proof-of-stake network: a panda face image called "The Transition."

At the same time, the eco-aspect had a crushing effect on miners, who appear to be the side that suffered most from the Merge. It is possible that some miners will choose to mine on another chain instead of selling their gear.

Of course, the biggest concern and criticism of post-Merge Ethereum is that it is moving toward centralization. Proof-of-stake depends on users buying, holding, and staking large amounts of the network's cryptocurrency.

And while control of the Ethereum network will no longer be concentrated in the hands of a few publicly traded mining syndicates, critics insist that previous powerful players will simply be replaced by new ones. Lido, a kind of community of validators, controls over 30% of the stake on the Ethereum proof-of-stake chain. Coinbase, Kraken, and Binance - the three largest crypto exchanges - own another 30% stake in the network.

“Since the successful completion of the Merge, the majority of the blocks — somewhere around 40% or more — have been built by two addresses belonging to Lido and Coinbase. It isn’t ideal to see more than 40% of blocks being settled by two providers, particularly one that is a centralized service provider (Coinbase)”- explained Ryan Rasmussen, crypto research analyst.

Since decentralization is the main component of the Web3 concept, this problem must be solved for Ethereum to develop successfully and stay ahead of its competitors in the future.

Therefore, the Merge cannot be considered the final transformation of Ethereum. The challenge is to keep upgrading the network to adapt it to the decentralization concept and to increase its security and speed.

As Buterin admitted, the Merge is just the beginning. "To me, the Merge just symbolizes the difference between early stage Ethereum, and the Ethereum we've always wanted to become," he said. "So let's go build out all of the other parts of this ecosystem and turn Ethereum into what we want it to be."

No matter how much the traditional financial sector resists the advance of cryptocurrencies, they will inevitably take a dominant position in the future. And there is no doubt that the evolving Ethereum is one of the main pillars of this industry.

 

Tips for a successful development process for an IoT team

Many sources describe the challenges and failures dealt with by the companies launching IoT projects. 

In this article, I would like to look into this aspect from the perspective of IoT app developers. 

Based on surveys of our IoT team members who have participated in IoT solution development, we have faced the following issues, which we will definitely take into account in upcoming projects and which may be helpful to other developers:

Communication

Communication is the main factor in our teamwork. In the case of an IoT project, it implies not only communication between teammates but also communication and a correct connection between the IoT device and the application.

This means that during development it is extremely important for the team to have the IoT device on hand; it is a must-have for the development team.

Collaboration

Hardly any other kind of project demands as much collaboration between various teams with specific expertise. This is an absolutely reasonable approach, because it's impossible to have a satisfying level of expertise in every required technology. So, managing an IoT project requires virtuoso communication and a clear understanding of which division is responsible for what, as well as a clear understanding of the tasks for each stage and each link in the chain, with a clearly defined result of the work.

Project Business Goals

Thus, even at the MVP stage it is extremely important to understand the business goals of your project. The main purpose of the Internet of Things is to solve a business problem, not to surprise techno geeks with a cool idea. You just need to concentrate on the problem, and the technology will follow. A clear idea will enable the whole team to find better solutions and build development processes.

Apps for Clients

And one of the most important points for the app development team: the Customer often thinks more about the connected devices than about the application itself.

However, it is the application and data that create demand for connected devices, not vice versa. It's important to remember that even a tiny IoT project can unveil significant business opportunities.

But one of the strongest indicators of IoT maturity is the use of analytics. Adding analytics revolutionizes the project. When you analyze IoT data, you get information that can be used to achieve business goals and objectives. So, don't miss out on opportunities on your way.

The issue of security is so obvious to everyone that it hardly needs a separate mention.

So, let’s make the IoT development process a smooth path to incredible results.

 

The Merge – Ethereum is on the verge of grandiose changes!

The whole crypto community is holding its breath ahead of the most significant event in Ethereum's history. A large-scale update called the Merge is planned for the Ethereum network, which involves changing the consensus algorithm from Proof-of-Work (PoW) to Proof-of-Stake (PoS).

The goal of the upgrade is to make this blockchain platform more scalable, secure and decentralized.

The actual activation of the Merge will happen with the "Paris" update, around September 15th, when the cumulative Terminal Total Difficulty (TTD) reaches 58,750,000,000,000,000,000,000 (5.875 × 10²²). The TTD specifies the final Proof-of-Work block, after which the Proof-of-Stake consensus takes over.
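
For illustration, here is a minimal sketch (assuming the web3.py library and a hypothetical RPC endpoint of your own) of how anyone can watch this threshold approach by polling a node for the accumulated total difficulty:

    # Minimal sketch: compare the chain's accumulated difficulty to the TTD.
    # Assumes web3.py; the endpoint URL below is a placeholder.
    from web3 import Web3

    TERMINAL_TOTAL_DIFFICULTY = 58750000000000000000000

    w3 = Web3(Web3.HTTPProvider("https://your-node-rpc-url"))  # hypothetical
    block = w3.eth.get_block("latest")

    # 'totalDifficulty' sums the difficulty of all blocks up to this one
    if block["totalDifficulty"] >= TERMINAL_TOTAL_DIFFICULTY:
        print("TTD reached: the terminal Proof-of-Work block is in")
    else:
        print("Still below the TTD, mining continues")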

With the change of consensus, the era of mining will finally come to an end. After Ethereum's transition to PoS, miners in the network will be replaced by validators, who will confirm new transactions backed by their stakes and receive rewards in Ether (ETH) for this work.

To clearly understand what is happening, let's take a closer look at the basic concepts of the blockchain and figure out what consensus algorithms are and what the pros and cons of the PoW and PoS algorithms are.

Each blockchain has its own protocol: a set of rules and actions aimed at transferring data. The protocol is a critical component of blockchain technology that allows network nodes to interact, data to be transmitted, and mined blocks to be confirmed. A node is one of the many devices that runs the blockchain protocol software and usually stores the transaction history. Nodes are connected to each other in a decentralized network.

The consensus algorithm ensures that the rules of the protocol are followed and that all transactions are authentic. In other words, it is responsible for ensuring that all nodes of the network agree on adding a new block. In this way, the consensus algorithm maintains the integrity and security of the network.

Proof-of-Work and Proof-of-Stake are currently the most widely used and best-known consensus algorithms. In fact, there are many more, but for now we will consider only these two.

 

Proof-of-Work (PoW) is widely used in cryptocurrency mining for validating transactions and mining new tokens. It is a mechanism that allows the decentralized Ethereum network to come to a consensus, i.e. agree on things like account balances or the order of transactions. This prevents users from “double spending” their coins and makes the Ethereum network extremely difficult to hack or fake.

To add a block, network members need to solve an arbitrary mathematical puzzle, finding a hash that publicly proves the work done, so that no one can cheat the system.

A hash function converts an array of input data of arbitrary length into a bit string of fixed length according to a certain algorithm. The conversion performed by the hash function is called hashing, and its result is called a hash. Finding a hash that satisfies the puzzle requires a lot of energy, and the difficulty only increases as more miners join the network.
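
To make the puzzle concrete, here is a toy sketch of the idea (a deliberate simplification; Ethereum's real Ethash algorithm is far more involved): keep varying a nonce until the hash of the block data falls below a target, and publish the winning nonce as proof of the work done.

    # Toy Proof-of-Work: find a nonce whose SHA-256 hash is below a target.
    import hashlib

    def mine(block_data: str, difficulty_bits: int):
        target = 2 ** (256 - difficulty_bits)  # more bits = harder puzzle
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if int(digest, 16) < target:
                return nonce, digest  # the publicly verifiable proof
            nonce += 1

    nonce, digest = mine("block #1: Alice pays Bob 5 ETH", difficulty_bits=20)
    print(f"nonce={nonce}, hash={digest}")

Note the asymmetry that makes the scheme work: finding the nonce takes a huge number of attempts, while checking it takes a single hash.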

The first miner who is lucky enough to find a solution gets the right to add the block to the chain and to receive a reward for the work done, which is the main motivation for mining. All nodes compete with each other, increasing their computing capacity in order to be the first to receive the reward.

The main disadvantages of PoW:

  • Mining requires an enormous amount of energy. Nodes in the network compete with each other, constantly performing complex calculations, yet most of this work is done for nothing, since the reward goes to only one node. Bitcoin mining consumes more energy than countries like Switzerland or Greece;
  • Low speed and poor scalability. PoW blockchains are sorely lacking in speed. For example, the maximum throughput of the Bitcoin network is only 7-10 transactions per second. Such low rates are not suitable for mass, everyday use;
  • Users have to pay fees to miners for the verification of transactions. The more users in the network, the higher the fees; for small transactions, the fee can even exceed the amount of the transfer itself.

 

Proof-of-Stake (PoS) reduces the amount of computational work required to validate the blocks and transactions that keep the blockchain secure. Computing power (block validation) is replaced by staking: the process of locking up cryptocurrency assets in order to earn rewards or interest.

This algorithm gives the right to create the next block in the blockchain to a node based on its balance, i.e. the amount of resources, such as cryptocurrency coins, that it has staked. The node does not receive a reward for the creation of the block itself; the reward is paid for the transactions.
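
As a simplified sketch of this selection idea (Ethereum's actual proposer selection is more involved and relies on on-chain randomness), the chance of creating the next block can be modeled as a random draw weighted by each validator's staked balance:

    # Simplified stake-weighted proposer selection (illustrative only).
    import random

    stakes = {"validator_a": 32, "validator_b": 96, "validator_c": 320}  # ETH

    def pick_proposer(stakes):
        validators = list(stakes)
        weights = [stakes[v] for v in validators]
        return random.choices(validators, weights=weights, k=1)[0]

    # validator_c holds the largest stake, so it is picked most often
    print(pick_proposer(stakes))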

The main advantages of the PoS algorithm:

  • Low power consumption compared to PoW algorithms;
  • No special equipment needed;
  • High speed and scalability. For example, throughput increases to as much as 2,000 transactions per second;
  • Low commissions;
  • Participation in the evolution of the project: validators take part in voting on its future development.

 

But aside from the fact that Proof-of-Stake is younger and less tested than Proof-of-Work, the biggest concern about the PoS algorithm is the risk of centralization: validators who hold the largest amounts of coins may eventually control the majority of the network. Therefore, blockchain developers have been working on new versions of the PoS algorithm in recent years to solve this issue.

So, what will happen when the cumulative difficulty of Ethereum mining finally exceeds the assigned TTD value? After crossing this milestone, there will be no more mining on the network. Nodes will stop accepting blocks from miners and will instead wait to receive them from PoS validators.

The updated version of the protocol after the transition is called "Paris", continuing the line of European capitals after "Berlin" and "London". On the evening of September 11, almost 84% of clients were ready for the transition.

To become an Ethereum validator, you need to deposit at least 32 ETH. To optimize the calculations, staking participants are divided into committees: randomly assigned groups of 128 to 2,048 validators.

Time in Proof-of-Stake Ethereum is divided into slots (12 seconds each) and epochs (32 slots). In each slot, a randomly selected validator proposes a block; this validator is responsible for creating the new block and sending it to other nodes in the network. A committee of validators then votes on the validity of the proposed block. Committee members are shuffled after each epoch.
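
The timing arithmetic is simple; here is a short sketch of the schedule described above:

    # Slot/epoch timing in Proof-of-Stake Ethereum.
    SECONDS_PER_SLOT = 12
    SLOTS_PER_EPOCH = 32

    def slot_and_epoch(seconds_since_genesis):
        slot = seconds_since_genesis // SECONDS_PER_SLOT
        epoch = slot // SLOTS_PER_EPOCH
        return slot, epoch

    # One epoch lasts 32 * 12 = 384 seconds, i.e. 6.4 minutes
    print(slot_and_epoch(3600))  # after one hour: slot 300, epoch 9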

By June 2022, the energy consumption of the Ethereum blockchain was 112 TWh per year. Replacing mining with staking is expected to cut this figure by 99.95%, to roughly 0.06 TWh. This will not affect the operational processes of the protocol or the economics of projects, but it will allow Ethereum to avoid criticism from environmentalists. In addition, Ethereum will become more attractive to investors who take environmental issues into account. The developers claim that after the PoS implementation, each node will require no more electricity than a regular PC.

The Ethereum roadmap also includes a technology called sharding, which is necessary to increase the scalability of the blockchain. Sharding is the division of a common database into fragments, with the information stored in a distributed way across nodes. This update will allow the Ethereum network to grow in line with the increasing load.

Sharding will reduce hardware requirements and allow a node to run on laptops and smartphones. The update is planned for 2023, but the final date depends on the success of the Merge, the transition to Proof-of-Stake.

There is no doubt that the Merge is one of the most significant events in the history of cryptocurrencies, and it may have far-reaching consequences, from unpredictable fluctuations in the price of Ether to global changes across the crypto industry as a whole.

 

Are you ready for the Metaverse?

Introduction

The metaverse is an online world: a virtual space where the digital version of you performs activities similar to those in the real world, such as communicating, building, owning digital real estate, and using digital currency that can be converted into real-world transactions.

Matthew Ball, a leading theorist of the metaverse, identifies its seven main features:

  • Endless existence: it never stops, resets or ends;
  • An unlimited number of users can connect to it simultaneously. Anyone can join the metaverse at any time and participate in its life on an equal basis;
  • A functioning economic infrastructure and a reward system for virtual work that brings value recognized by others: income or earnings that can be spent and invested;
  • Compatibility of data across different digital worlds;
  • Independence from external factors: it exists in real time, although developers can create and schedule events in the metaverse;
  • It bridges the digital and physical worlds, private and public networks, open and closed platforms;
  • It is filled with content created by a variety of individual contributors or groups of users.

Though currently existing gaming platforms are the best examples of the metaverse, it also offers a huge number of opportunities for businesses.

Today, many companies have integrated VR technology into their manufacturing processes. Ford, for example, uses virtual reality to let employees from different countries work on car designs simultaneously. VR makes processes faster and cheaper and does not require physical materials.

VR technologies can be useful not only in production but also in office work. Imagine being able to work remotely while still holding meetings and negotiations with colleagues and business partners in a virtual space. Existing platforms such as glue.work or Mesh, which Microsoft introduced in November 2021, already allow several people to communicate in one virtual space.

The participants can interact with each other and even with 3D objects. So far, this is possible with digital avatars, but later Microsoft plans to develop technologies that will allow people to appear in a virtual environment as their own holograms, which will help them express real emotions and communicate better.

Blockchain is also an extremely important part of the metaverse world. It is the economic mechanism of the future. Blockchain will tie each user's data and money to their digital account and allow them to use purchased products throughout the metaverse.

The rapid development of the metaverse will bring benefits to many areas of our lives: education, entertainment, the arts, health care, and others. It opens up important business opportunities, even more impressive than the digital transformation that has turned most companies into online businesses in the past few years.

The metaverse is creating a new economy that can significantly increase brand awareness, deliver immersive customer experiences, and improve communications. 

The fashion industry is already using these tools: fashion shows where models walk not on a catwalk but in the virtual world have already taken place.

Did you know that the sports brand Nike has its own metaverse, Nikeland, where you can style your avatar by buying exclusive digital shoes, clothes and accessories? Adidas has already released a mixed collection for both the real and the digital world: the items of its “Into the Metaverse” collection were sold as NFTs.

Celebrities are also starting to dabble in the NFT world. One example is Quentin Tarantino, who decided to sell parts of the Pulp Fiction script that were not included in the final film as NFTs.

Digital giants like Google and Microsoft are now investing heavily in creating their own metaverses, and Facebook has even renamed itself Meta. According to Zuckerberg, users of the new metaverse will no longer be tied to any particular social network. People will be able not only to view content but literally to be inside it.

The ultimate goal of the metaverse as a product is to recreate the real world with all the feelings and processes. Companies want people to be able to "live" in the metaverse - to hold meetings there, watch shows, play games and make new friends. 

However, besides the fans of the metaverse, there are those who believe that it has a dark side.

 

Privacy concerns

There is no doubt that technologies that already track our behavior and preferences will be used in the metaverse. Moreover, these technologies will become much more aggressive and intense.

By connecting the wearable devices necessary for immersive virtual experiences, we will allow companies to track our physical reactions and emotions. They will collect and use huge amounts of data for marketing and other purposes.

For example, the eye-tracking technology in VR headsets will make it possible to collect information about where and for how long we look during metaverse experiences. Such capabilities raise significant concerns for those who care about privacy.

 

Health Concerns

Returning to the real world after an amazing and impressive time spent in virtual reality can cause sadness or even depression. Adults and children already suffer from gaming addiction. As the metaverse expands, more and more people will suffer from this so-called “digital hangover”.

 

Legislation issues

Is it possible to control everything that happens in a vast universe comparable to the real world? The first allegations of sex parties, harassment and meetings of extremist organizations in the metaverse have already surfaced. We must be prepared for an increase in such cases. 

Second, who will evaluate the ethical and moral components of the content, and how? Can a virtual act be a crime? Who will establish the rules for what is allowed and what is forbidden? The metaverse will cause regulatory problems and create new blind spots in legislation.

Another issue, which is rather a challenge the metaverse is already facing on the way to its expansion, is making virtual reality technology accessible to the general public. One of the obstacles is the high price of VR headsets. They are now becoming more affordable and easier to use, and you no longer need a powerful computer to immerse yourself in VR.

Beyond that, a technological breakthrough in headsets should be expected. So far, they are quite bulky and can hardly be called mobile. Portable, convenient glasses that let you enter the metaworld from anywhere will probably be produced soon. Three major new releases are coming this year: the Sony PlayStation VR2, Meta's "Project Cambria" headset, and Apple's AR glasses, which will most likely be similar to smart watches and will display notifications, calls, and information in augmented reality.

These technologies aim to enable full immersion in the virtual world, where objects can be touched and felt. The South Korean company bHaptics has presented gloves that not only reproduce your movements in VR but also allow you to feel objects. Sensitivity is provided by a system of pads that inflate or deflate to apply pressure to different parts of the hand, simulating the sense of touch. bHaptics also has a special suit that converts the sound effects of explosions, shots, etc. into tactile feedback using special built-in motors. In the scary game Phasmophobia, for example, you can already feel the touch of ghosts.

Why is all this hype around the metaverse happening just now, even though the term was first mentioned back in 1992 by science fiction writer Neal Stephenson in his novel Snow Crash?

Perhaps it was influenced by the Covid-19 pandemic, when people were isolated for a long time: public events like concerts, visits to museums, theaters, cinemas and clubs were under restrictions. Meanwhile, the human need to interact, as well as the need for self-expression and recognition, significantly increased the value of a digital presence on virtual platforms.

Since the metaverse includes all the main components of the new stage of Internet development, such as NFTs, blockchain, cryptocurrencies and DeFi, there is an opinion that Web3 is the metaverse.

Based on research, Gartner predicts that people will spend at least an hour a day in the metaverse by 2025.

Are you already metaverse-ready?

 

 

Inmost continues receiving new 5-star reviews on Clutch, this time for promising app development from Dyop

Introduction 

We can all agree that vision is one of the most fundamental senses and a powerful means of interfacing with the world. This is the main reason why we have to pay attention to any deformity or change in it. Early detection of persistent eye-related diseases is critical for preventing further deterioration of vision and maintaining overall eye health.

In addition, such a vital healthcare topic ought to enter the era of digitalization smoothly but effectively. So what is the solution?

The mobile apps market is developing almost every minute. Switching to digital is becoming a necessity, which encourages more and more people to get used to a new, more comfortable and rapid reality. Current research on the US digital market points to rapid growth in the healthcare segment during 2022, which is expected to become a worldwide tendency.

This is why meeting the growing demand for mobile apps in such a highly responsible niche as healthcare becomes extremely complex: you urgently need well-grounded developers who will provide patients with trust, quality and comfort.

Fortunately, Inmost stands for exactly that kind of software provider. We’re proud to welcome back our clients with new projects and are happy to continue cooperation for many years.

 

About Dyop Vision Associate 

Let’s dive into the history of Dyop Vision Associate. The company, located in Atlanta, Georgia, and headed by Alan Hytowitz, has spent 14 years creating the Dyop Test, which is used to measure visual acuity.
Moreover, the Dyop is considered a revolutionary way to measure 21st-century vision. And we’ll explain why.

As a vision scientist, Alan discovered that the 1862 letter-based tests used globally by optometrists and ophthalmologists are actually making people blinder in the 21st century: they make eyeglass lenses excessively powerful, contributing to the major cause of the global epidemic of myopia.

In simple terms, a Dyop, short for "dynamic optotype", is a spinning segmented ring used as a visual target (optotype). This kinetic retinal stimulus is essential for vision, as it helps avoid depletion of the photoreceptor response, and it can be used for measuring acuity and refractions based on the Dyop's angular arc width (diameter).

Originally, the client had a working version of the eye-test algorithm for Windows. However, he wanted to build a Dyop application that could run on any OS, in any browser, on any device. The core goal was to allow a doctor to log in to the system and test a patient’s eyes remotely, or to let patients have their eyes checked remotely and then, if problems are identified, book an appointment with the closest doctor for a physical eye test.
From this starting point, Inmost was ready to help implement this bold challenge.

 

What we delivered 

As a software development partner, we decided to implement PWA technology and to use the WebSocket protocol to establish a real-time connection between the doctor’s and the patient’s devices.
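
To illustrate the approach (a minimal sketch with hypothetical names and messages, not the production Dyop code), a WebSocket relay can pair the two devices in a shared session and forward test events, such as button presses, between them; here using Python's websockets library:

    # Minimal WebSocket relay pairing two devices in a shared session.
    # Assumes the "websockets" library; names and messages are illustrative.
    import asyncio
    import websockets

    sessions = {}  # session id -> set of connected sockets

    async def relay(websocket):
        # The first message identifies the session, e.g. "join:session-42"
        session_id = (await websocket.recv()).removeprefix("join:")
        peers = sessions.setdefault(session_id, set())
        peers.add(websocket)
        try:
            async for message in websocket:  # e.g. "ring:left" button press
                for peer in peers:
                    if peer is not websocket:
                        await peer.send(message)
        finally:
            peers.discard(websocket)

    async def main():
        async with websockets.serve(relay, "localhost", 8765):
            await asyncio.Future()  # run forever

    asyncio.run(main())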

The Dyop app was designed with all client-related risks and security concerns in mind. As we’ve already mentioned, digital healthcare platforms have one main task: to give people confidence and trust. We provide our clients with mobile and web versions that help them easily check their vision and get feedback. The app is relevant for both patients and doctors.

The process of eye testing is very simple. The patient looks with each eye separately at the rotating ring and answers which way it is rotating: left, right, or unclear. Three main buttons for these options control the process.

 

Key development outcomes

Here are a few key features delivered by Inmost. We’ve implemented 3 types of eye testing:

  • Doctor-controlled testing, with the doctor in the same room as the patient.
  • Doctor-controlled testing performed remotely.
  • Patient self-testing.

 

What did the client say?

We’d like to share our excitement with you about a 5-star review on Clutch - a B2B ratings and reviews platform. 

 

 

Alan Hytowitz was enthusiastic about working with us and shared his impressions, describing the details that made him confident in cooperating with Inmost.

“They've been very helpful in terms of logistics and planning what needs to be done. Inmost seems to be extremely competent and pleasant, and they seem to have understood exactly what we needed. And even though I'm not really the software guy (I'm more the mad scientist of the vision test), I really like the folks at Inmost.”

To summarize, we’ve been thrilled to deliver such an innovative project and to participate in the improvement of the healthcare sector. Inmost is looking forward to starting new collaborations to make our future better.

To find out more about our development process, check out the full case study on Clutch. 

A brief overview of communication platform development – from 1988 till now

Introduction

Communication platforms have come a long way, turning into apps we can no longer do without, such as WhatsApp or Skype. Now it's hard to imagine chatting without the ability to send a GIF or a voice message. However, these features were not even conceived at the dawn of the first text terminals.

Infancy

Originally, the Internet was designed to exchange information. Communication platforms' features included sharing text messages and files, and special protocols and programs were developed to provide users with both of these functions. Thus, one of the earliest real-time messaging protocols, with a corresponding program, was called Internet Relay Chat (IRC). IRC appeared as far back as 1988 and ran on a text terminal.

Internet Relay Chat

The ICQ era

After the Graphical User Interface (GUI) took over the world, in 1996 four high school students from Tel Aviv created the “first edition” of a messenger, known today as ICQ (in consonance with the “I seek you” catchphrase).

ICQ chat

The ICQ client had a graphical interface and could send files in addition to text messages, as well as display graphical content, scalable within the program window.

ICQ was followed by plenty of replicas inspired by its success, many of which attempted to register their clients in the system. Since the ICQ protocol was never published, other companies simply tried to “hack” it. Therefore, to protect their messenger, the ICQ developers frequently changed the protocol for interacting with the server, temporarily putting all third-party clients out of service.

At its peak, ICQ accounted for about 20 million users worldwide, an insane number for that time! Such success motivated market leaders like Microsoft, AOL, and Yahoo to develop similar programs. And this whole class of software got its name: Instant Messaging (IM).

Metamorphosis

In 1999, the very first attempt to standardize the protocol was made: Jabber, an open-source alternative to ICQ, appeared. It could send messages and files using its own protocol, had client applications for Linux and Windows, and, most importantly, had gateways to all popular messengers of that time, including ICQ.

In 2004, the XMPP protocol (“eXtensible Messaging and Presence Protocol”), based on Jabber, was created.

XMPP architecture scheme

Although XMPP was proclaimed a standard for messenger development, it was too convoluted, and it became even more awkward whenever new features had to be “squeezed” into the protocol architecture. Today, there are several open-source implementations of XMPP servers and clients.

Skype age

In 2003, the market was blown up by Skype, a new IM application with a “killer feature”: voice communication. The ability to call a regular phone number appeared almost immediately, in version 1.2. It was a true revolution. In 2010, the developers added group video conferencing, the number of users grew very quickly, and by the end of that year Skype accounted for about 667 million users.

Present time

Since 2010, many IM programs have been introduced to the general public. Between 2011 and 2014, IM programs were among the most popular ideas for startups. However, it became clear that developing scalable applications able to run 24/7 is a very complicated task.

Enthusiasm is necessary but not sufficient to create a proper program. Firstly, the development process requires a tremendous amount of quality assurance resources, usually unaffordable for small companies. Secondly, the era of deploying applications on servers in data centers with physical hardware has passed. Modern messengers have to serve millions of users, and no startup can afford to keep that amount of hardware in reserve.

Today, having a smartphone is almost a must for everyone. This led software developers to a simple idea: linking IM-application accounts to phone numbers and using them for authorization. This idea has been implemented in all modern messengers, such as Telegram, Viber, and WhatsApp: though they have desktop clients, you can’t sign up without a phone number.

Telegram has implemented another brilliant idea: channels. A channel is one-directional communication. The channel owner can post messages, while other users (subscribers) have read-only access, with no ability to reply or delete messages. However, recent versions of Telegram allow subscribers to leave comments on posts.
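
The permission model is easy to picture with a toy sketch (our illustration, not Telegram's actual implementation): only the owner may post, while subscribers are limited to reading and, in recent versions, commenting:

    # Toy model of a channel: owner posts, subscribers read and comment.
    class Channel:
        def __init__(self, owner):
            self.owner = owner
            self.posts = []
            self.comments = {}  # post index -> list of comments

        def post(self, user, text):
            if user != self.owner:
                raise PermissionError("only the channel owner can post")
            self.posts.append(text)

        def comment(self, post_id, user, text):
            # one-direction communication, relaxed only for comments
            self.comments.setdefault(post_id, []).append(f"{user}: {text}")

    news = Channel(owner="inmost")
    news.post("inmost", "New article is out!")
    news.comment(0, "subscriber42", "Great read!")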

That's all for now about the history of messengers. It will surely continue, and we’ll see even more cool features, some of them hard to imagine today. And once they appear, Inmost will write about them in detail.


A software kit for medical specialists to create customized apps and configure them without programming

Our client is a global leader in telemedicine technologies and services, with centers in Israel and Germany and about 3 million medical interactions per year. The aim of this project was to build a software kit that can be purchased under license and enables anyone with the medical-technical know-how to launch their own patient apps, with no need for certification as a medical device manufacturer.

Telemedicine application aimed to strengthen trust in remote medical treatment

Remote medicine provides quality medical services to countries where living standards are not at the highest level, but establishing a trusting relationship between a doctor and a patient at a distance is one of its core issues. An application that guarantees safe and reliable storage of medical records significantly elevates patients' trust.

NFT-based solution assigning ownership of game artifacts and enabling them to be sold on any NFT marketplace

Our client is launching an MMORPG game and involves high-quality designers to create the artifacts' graphical representation, which is intended to have artistic value. We implemented his idea of adding a commercial component and enlarging the game community by creating a solution for selling game artifacts among existing players and potential users.

A rental application that functions as an intermediary between landlords and tenants

A common problem of daily accommodation booking services is conflicts between tenants and landlords. Our approach to developing the rental application reduces the human factor, makes payment and renting processes transparent, and eliminates conflict situations.

A chatbot testing the English level of new students

A company that offers distance learning of English approached us to develop a chatbot whose main function is to test the language level of students who want to enroll in a course. Previously this task was done by teachers, and the company had to spend a considerable amount of money on it. We found a solution that replaces expensive teacher time with an automated system for checking the knowledge level of new students.

Application for mowing lawns without human participation

The customer wanted to create an innovative solution that changes the concept of roadside lawn maintenance. The main idea of the project is to cut grass without human participation, via a convenient cross-platform mobile application for setting up the mower and remotely controlling the grass-mowing process.

Application remotely operating the snow-melting equipment

In regions with heavy snow cover in winter, roofs are usually equipped with special electrically heated metal tapes that melt the snow. When snow compacts, the system fails, causing extremely high energy costs for the owner with no actual benefit. We found a solution to instantly identify system failures and manage the snow-melting equipment remotely.

Application for learning English with automatic listening, speaking, grammar and vocabulary training options

Nowadays, more and more educational institutions provide their services online. This trend existed before COVID-19, but since the pandemic began it has grown significantly. We were approached by an English-teaching company to create a distance learning system that allows students to practice all the necessary skills for learning English, so that most of the tests can be conducted automatically.

A social network for couples providing all the necessary services and information for wedding organization

Organizing a wedding requires careful planning and coordination of a large number of services, such as photographers, florists, confectioners, restaurateurs and wedding venue owners, and guests also have to be invited. We created a social network that combines all the essential services for a couple to organize and celebrate their wedding.

Application for telemedicine service – social network for doctors and patients

There are many countries where quality medical services are available only in large cities and unreachable for residents of the provinces. Remote medicine might become a compromise solution to this problem. The main idea of this project is to create a specialized social network that connects patients and doctors and enables doctors to provide qualified assistance to patients remotely.

Car-sharing application with adaptive rent rate depending on client’s driving manner

According to worldwide statistics, speeding is the main cause of car damage and accidents. A car-sharing company owner aimed to motivate careful driving and extend the rental cars' lifetime. The solution involves estimating driving behavior and rewarding responsible drivers.

Inmost chooses the development of software solutions based on Amazon Chime as the main business growth trajectory

Introduction

The modern information technology outsourcing market provides a wide range of opportunities for all participants. We, the Inmost team, follow software development trends and, at the same time, look for ways to apply them for our customers' benefit and business success.

Building on all of our successfully delivered projects, we've decided to choose several specific fields of expertise as potential growth points for the company. This approach allows us to focus operational efforts, including further training and certification of employees, on growing proficiency in the selected areas to meet our clients' needs even better.

One of Inmost's core areas of expertise is CPaaS platforms (Communication Platform as a Service), and in particular the Amazon Chime communications service.

Why Communication Platforms as a Service?

We believe communication platforms are already, and will remain, among the most in-demand types of information technology services. According to International Data Corporation (IDC) forecasts, the global communication platforms market was expected to reach $4.56 billion in 2019 and to grow 39% by 2023.

Moreover, these estimates were made before the world economy faced the changes brought by the COVID pandemic. In 2019, no one could have predicted the increasing number of remote employees during 2020-2021, and therefore no one could have predicted the enormous growth in demand for high-quality video conferencing and other interaction tools, e.g. instant desktop sharing.

In May 2021, IDC adjusted its original estimate: experts now predict that the market will be worth $92 billion by 2023. A recent market overview published on nojitter.com shows that the communication platforms market is snowballing, so point estimates of its size quickly become outdated.

Inmost successfully delivers projects that involve different communication platforms, and Amazon Chime has impressively improved both the development process and business outcomes.

We’ve chosen Amazon Chime as our communication platform solution because Amazon is a leading provider of cloud technologies. This global tech giant invests immense resources in the development, debugging, and testing of its solutions. With Amazon’s platform, we can stay focused on customers’ needs instead of re-solving already solved problems.

Amazon has gained trust and loyalty all over the globe. At the moment, the AWS Global Infrastructure covers 25 Regions, each with several independent Availability Zones. As a result, it responds to requests reliably and almost instantly.

Amazon also has so-called “Wavelength Zones”: AWS infrastructure deployments where AWS storage and compute services are integrated into 5G providers’ networks. There is no real alternative to this, and it is an essential point for mobile application development.

Amazon launched Chime in 2017. Since then, the service has evolved significantly, and numerous new features have been added. There is no doubt that this secure, reliable, and affordable platform is the way of the future, and Inmost is the right software development company to provide solutions based on Amazon Chime.
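
As a small illustration of what building on Chime looks like (a minimal sketch using the AWS SDK for Python; the identifiers are placeholders, and real applications add error handling plus the client-side Chime SDK), a meeting and an attendee can be created in a few calls:

    # Minimal sketch: create an Amazon Chime meeting and one attendee.
    # Assumes boto3 and configured AWS credentials; ids are illustrative.
    import uuid
    import boto3

    chime = boto3.client("chime", region_name="us-east-1")

    meeting = chime.create_meeting(
        ClientRequestToken=str(uuid.uuid4()),  # idempotency token
        MediaRegion="us-east-1",               # where the media is hosted
    )

    attendee = chime.create_attendee(
        MeetingId=meeting["Meeting"]["MeetingId"],
        ExternalUserId="user-001",             # your application's user id
    )

    # The client app joins the call using the Meeting and Attendee data
    print(meeting["Meeting"]["MediaPlacement"]["AudioHostUrl"])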

Inmost gets a 5-Star Review on Clutch for Ongoing Mobile App Development from Cellar Ventures

Introduction

The mobile apps market has skyrocketed over the last decade. While in 2014 its global revenue amounted to roughly $100 billion, now the number is predicted to be close to $700 billion. As more and more consumers switch to digital, providing customers with a first-class UX becomes more vital for companies than ever.

At the same time, due to the growing demand for mobile apps, the SaaS (Software as a Service) niche has become extremely overheated, with businesses struggling to find reliable developers to establish long-term commercial relationships.

Luckily, Inmost stands for exactly that kind of software provider. We’re proud to welcome back our clients with new projects and are happy to continue cooperation for many years.

About Cellar Ventures, Inc

Cellar Ventures, Inc is a California-based company that operates as an intermediary agency between wineries and their customers on the global market. The company helps cellar owners to export wines via convenient channels and sell bottles directly to end-buyers all over the world.

In 2020, Cellar contacted Inmost with an idea to create a digital solution that would enhance the global wine trade and make it easier for both producers and consumers to sell wine abroad. The company needed an app that people could use to find all the information about a particular wine, including year, vineyards, grapes, etc. The solution needed to be perfect in terms of UI/UX, in order to compete and take market share from other apps in the niche.

What we delivered

As a software development partner, the Inmost team took on this case and applied a “full-cycle” approach: we were involved at every stage of product development, from market research, including an overview of competitors' apps, to development and post-release testing.

The CELLR app was designed to be a reference point for wine lovers. They can not only search for wine-related information but also purchase bottles, including some exclusive wines from private collections.

The app’s users can create wish lists, receive recommendations, and even trade with other cellar owners. Wine sellers, on the other hand, can use the app to take inventory of their cellars' stock and set prices for bottles.

Key development outcomes

We’ve used an advanced tech stack to make the CELLR app convenient and user-friendly. Here are just a few key features delivered by Inmost:

  • an option to recognize any wine label
  • the ability to conveniently buy a bottle of wine or put it up for sale via a smartphone
  • saving selected wines to user profiles
  • convenient search & filter functions
  • a full database of wines.

What did the Client say?

We at Inmost were incredibly excited and proud to receive our first 5-star review on Clutch - a B2B ratings and reviews platform.

 

 

In his conversation with Clutch, Jeffrey Ishmael, CEO of Cellar Ventures, shared the details of our ongoing collaboration. He explained that his company needed a mobile app for their business and spoke about the outstanding results achieved in cooperation with the Inmost team.

“We set out to work with Inmost from the beginning, to develop a mobile app for our new wine business, because we want to connect consumers directly to consumer wine sales… From a metrics perspective, we’ve been working together quite efficiently.

Where the company has been effective is in the stability of their own team. We haven’t had a lot of turnover on most of the staff that’s been working on the project, because Inmost does a lot to recruit, train, and motivate their team.

They’ve been very complete with their execution and delivery.”

To find out more about our development process, check out the full case study on Clutch.

Peer-to-peer trading platform that directly connects wine enthusiasts and cellar owners

At the end of 2019, the Cellar company approached Inmost with an idea to create a digital solution that would enhance the global wine trade and make it easier for both producers and consumers to sell bottles of wine across borders. The company needed an app that could be used to find all the information about a particular wine, including vintage year, grape variety, appellation, and producer. The solution needed to be perfect in terms of UI/UX, in order to compete and take market share from other apps in the niche.