Private blockchain technology – a game-changer for healthcare data management

Private blockchain technology in healthcare data management

While public blockchains such as Bitcoin and Ethereum have gained attention for their use in cryptocurrency, private blockchains have earned popularity in industries where security and privacy are critical, such as healthcare.

In recent years, blockchain technology has emerged as a potential solution to many challenges facing healthcare data management. Private blockchains are permissioned networks in which access is limited only to authorized parties. This makes them a perfect solution for healthcare, where data privacy and security are significant issues. 

Private blockchain technology can improve data management and sharing, and potentially revolutionize how healthcare providers, patients, and other stakeholders interact.

 

Current challenges in healthcare data management

The healthcare industry currently faces several challenges in managing data effectively. Some of the most pressing issues include:

  • Data security and privacy. With the increasing amount of sensitive patient data being stored and transmitted digitally, it is critical to provide security and privacy. Healthcare organizations must comply with regulatory requirements such as HIPAA, which imposes strict standards for safeguarding patient information. Despite these regulations, data breaches are still common in the healthcare industry, with potential consequences ranging from financial losses to reputational damage.
  • Interoperability. Healthcare providers often use different systems to manage patient data, which can lead to data inconsistencies and hinder communication between different parties. The lack of interoperability between different systems can result in errors and redundancies in care delivery.
  • Data sharing and communication. Sharing patient data among healthcare providers is essential for delivering high-quality care, but current systems for sharing data are often slow and inefficient. The process of requesting and receiving data can take days or even weeks, and the data may not be up-to-date or accurate. This can lead to delays in care delivery, misdiagnoses, and other adverse outcomes.

These challenges highlight the need for innovative solutions to improve healthcare data management. Private blockchain has the potential to address these challenges by providing a secure, transparent, and efficient platform for managing and sharing data.

 

How can private blockchain solve these challenges?

Private blockchain uses cryptographic techniques to ensure that data stored on the network is secure and tamper-proof. Using a distributed ledger ensures that the data is transparent and can be accessed by authorized parties in real time. This can improve the security and privacy of patient data, as well as enhance trust and confidence in the healthcare system. 

Blockchain technology can help overcome the interoperability challenges by establishing shared protocols and standards that enable different systems to communicate. This can help reduce errors, redundancies, and inefficiencies in care delivery, as well as improve the accuracy and completeness of patient data.

A private blockchain can provide a secure and efficient platform for sharing patient data among different parties, including healthcare providers, patients, insurance companies, and researchers. This can help improve the speed and accuracy of diagnosis and treatment and facilitate collaboration and innovation in healthcare.

 

Examples of using private blockchain in healthcare

Private blockchain technology has already been adopted by various healthcare organizations for different purposes. Below are some examples of the use of private blockchain in healthcare:

Using a private blockchain for electronic medical records (EMR) offers several advantages over traditional EMR systems. It can increase the security and confidentiality of patient data while enabling safe and efficient information exchange between medical facilities.

One of the key benefits of a private blockchain for EMR is that it can provide a tamper-proof and transparent record of all patient data transactions. Every transaction, such as adding a new record or update, is captured on the blockchain in a secure and immutable way. This can help prevent data tampering, loss or leakage, common issues with traditional EMR systems.

It also increases patient control over their data, enabling them to monitor who has access to the data and to grant or revoke access to their medical records when needed. This could increase patients' trust in the healthcare system.
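The core ideas here, an append-only, hash-linked log of record and access-control events, can be sketched in a few lines of Python. This is a toy illustration of the concept, not Hyperledger Fabric code; the class and method names are invented for the example.

```python
import hashlib
import json


class EMRLedger:
    """Toy append-only ledger: each entry links to the previous one by hash."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
        self.entries.append({
            "event": event,
            "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest(),
        })

    def verify(self) -> bool:
        """Recompute every hash; any tampered entry breaks the chain."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = json.dumps({"event": entry["event"], "prev": prev_hash},
                              sort_keys=True)
            if entry["prev"] != prev_hash or \
               entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev_hash = entry["hash"]
        return True


ledger = EMRLedger()
ledger.append({"type": "record", "patient": "p1", "data": "diagnosis A"})
ledger.append({"type": "grant_access", "patient": "p1", "to": "dr_smith"})
ledger.append({"type": "revoke_access", "patient": "p1", "to": "dr_smith"})
print(ledger.verify())  # True: chain is intact

ledger.entries[0]["event"]["data"] = "diagnosis B"  # tamper with a record
print(ledger.verify())  # False: tampering is detected
```

Notice that access grants and revocations are themselves ledger events, so the history of who could see the record is as tamper-evident as the record itself.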

A private blockchain can also be used for managing clinical trials and research data. The technology can facilitate secure and transparent data sharing between different stakeholders involved in the research process, including patients, researchers, and regulatory bodies. A private blockchain can also help ensure data integrity, enhance data privacy, and reduce the risk of data manipulation or fraud.

A private blockchain can improve supply chain management and drug tracking in healthcare by enabling secure and transparent monitoring of drugs from manufacturers to patients. Using private blockchain, stakeholders can track the movement of drugs, verify their authenticity, and ensure that they are not tampered with. This can help reduce the risk of counterfeit drugs entering the supply chain, enhance patient safety, and improve regulatory compliance.

 

The future of healthcare with private blockchain technology

The use of private blockchain technology can significantly improve the efficiency of healthcare systems, leading to cost savings for healthcare providers. One of the primary reasons for this is its ability to automate many administrative tasks, reducing the need for manual intervention and streamlining operations. Using smart contracts and digital identities, blockchain can automate tasks such as patient identification, medical record management, claims processing, and supply chain management, among others.
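As a rough sketch of how a smart contract could automate claims processing, consider a deterministic rule that every network participant evaluates identically. The rules and field names below are invented for illustration; a real contract would run as chaincode on the blockchain.

```python
def process_claim(claim: dict, coverage: dict) -> dict:
    """Deterministic claim adjudication, as a smart contract would run it.

    Every node executing the same rules on the same inputs reaches the
    same verdict, so no manual back-office review is needed for
    clear-cut cases.
    """
    if claim["procedure"] not in coverage["covered_procedures"]:
        return {"status": "denied", "reason": "procedure not covered"}
    payable = min(claim["amount"], coverage["max_per_claim"])
    return {"status": "approved", "payable": payable}


coverage = {"covered_procedures": {"x-ray", "mri"}, "max_per_claim": 500}
print(process_claim({"procedure": "mri", "amount": 800}, coverage))
# {'status': 'approved', 'payable': 500}
print(process_claim({"procedure": "surgery", "amount": 300}, coverage))
# {'status': 'denied', 'reason': 'procedure not covered'}
```

The point is not the rules themselves but that they execute automatically and identically for every party, which is what removes the manual intervention.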

Delivering secure and efficient data sharing, blockchain can enable providers to create new partnerships and collaborations that were previously impossible. For example, healthcare providers can use blockchain to securely share patient data with research institutions and pharmaceutical companies, leading to the development of new treatments and therapies.

Furthermore, blockchain technology can also facilitate the creation of decentralized health platforms that incentivize patients to engage in pro-health activities. By using blockchain-based tokens or rewards, patients can earn incentives for participating in clinical trials, sharing their health data, or engaging in other health-related activities. This can lead to improved patient outcomes and new revenue streams for healthcare providers.

Overall, the potential for new business models and revenue streams created by blockchain technology in healthcare is vast, and the industry is only beginning to scratch the surface of what is possible. By leveraging the benefits of blockchain, healthcare providers can improve patient outcomes, reduce costs, and create new opportunities for revenue generation and innovation.

 

Inmost team experience

At Inmost, we understand that trust is essential to healthcare, especially regarding telehealth services. Because patients don't have the opportunity to see a doctor in person, it can be challenging to build trust with the healthcare providers they communicate with via a screen.

To address this challenge for our client, a telehealth provider, our team implemented blockchain technology on the Hyperledger Fabric platform for patient medical records. Our solution provides secure storage and immutability for all patients' personal information, from registration data and diagnosis to screening and examination results and prescribed treatments.

This implementation has had a positive impact on our client's services. Now they can provide their patients with more comprehensive and evidence-based care by accessing a complete medical history. In addition, patients can be assured that their information is secure and cannot be tampered with.

More information about this case: https://inmost.pro/cases/clinic-reliable-storage/

 

An alternative to LoRaWAN. Yes, it exists.

Introduction. What is Symphony Link

Symphony Link is a wireless LPWA (Low Power Wide Area) system developed by Link Labs to overcome limitations of the LoRaWAN system, such as limited capacity (as the number of devices on the network grows, performance may degrade) and a level of security that may be insufficient for some types of applications.

It is built on LoRa CSS physical-layer technology and uses a patented technology called Symphony Link Spread Spectrum (SLSS), which is optimized for IoT applications.

The following are the key features of Symphony Link technology:

  • It is a protocol developed by Link Labs that targets LoRa-class range with higher performance;
  • The protocol is synchronous, unlike the asynchronous ALOHA access used by LoRaWAN;
  • It uses a channel size of 125 kHz;
  • It offers a high sensitivity of about -137 dBm;
  • It can use both unlicensed and licensed frequency spectrum: 902-928 MHz in the US and 863-870 MHz in Europe;
  • It operates without a network server. Unlike traditional network server-based communication protocols, Symphony Link devices do not rely on a centralized server to communicate with each other;

Symphony Link devices use a technique called Direct Sequence Spread Spectrum (DSSS) to establish communication. DSSS is a technique that allows multiple devices to share the same frequency channel by spreading the signal over a wide range of frequencies. Each device in the network has a unique "chirp" pattern that allows it to communicate with other devices in the network without interfering with other signals.
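The spreading idea behind DSSS can be demonstrated in a few lines: each data bit is XOR-ed with a faster pseudo-random chipping sequence, and a receiver that knows the sequence recovers the original bits even if some chips are corrupted. This is a simplified baseband illustration, not Symphony Link's actual implementation; the chip sequence is made up for the demo.

```python
# A toy pseudo-random chipping sequence (real systems use much longer codes).
CHIPS = [1, 0, 1, 1, 0, 0, 1, 0]


def spread(bits):
    """Replace each data bit with the chip sequence XOR-ed with that bit."""
    return [b ^ c for b in bits for c in CHIPS]


def despread(chips):
    """Recover each bit by XOR-ing its chip group against the known code."""
    bits = []
    for i in range(0, len(chips), len(CHIPS)):
        group = chips[i:i + len(CHIPS)]
        # Majority vote makes recovery robust to a few flipped chips.
        votes = [g ^ c for g, c in zip(group, CHIPS)]
        bits.append(1 if sum(votes) > len(votes) // 2 else 0)
    return bits


data = [1, 0, 1, 1]
signal = spread(data)            # 4 bits become 32 chips
signal[3] ^= 1                   # simulate one corrupted chip
print(despread(signal) == data)  # True: data survives the chip error
```

Devices using sufficiently different codes can share a channel, because a receiver correlating against its own code sees the others' transmissions as low-level noise.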

  • The Symphony Link gateway is an 8-channel sub-GHz base station;
  • The platform also includes features such as over-the-air (OTA) firmware updates, remote device management, and support for multiple application protocols.

Symphony Link is designed to support a wide range of IoT applications, including industrial automation, smart cities, agriculture, asset tracking, and environmental monitoring. The platform is highly scalable and can support networks with thousands or even millions of devices, making it a popular choice for large-scale IoT deployments. 

Let’s have a look at the main advantages this protocol has.


Advantages of Symphony Link:

  • Long-range connectivity: Symphony Link is optimized for long-range communication and can provide connectivity over several kilometers, even in harsh environments;
  • Low-power consumption: Symphony Link uses a low-power wireless technology that enables IoT devices to operate on battery power for several years, reducing maintenance costs and enabling remote and mobile deployments;
  • High reliability: Symphony Link uses a robust and reliable communication protocol that can withstand interference, noise, and other sources of signal degradation, ensuring high data transmission rates and low packet loss;
  • Secure communication: Symphony Link uses advanced encryption and authentication mechanisms to secure communication between devices and gateways, protecting sensitive data and ensuring compliance with privacy and security regulations;
  • Scalability: Symphony Link uses a unique MAC layer protocol, called the Symphony Link Protocol (SLP), that enables highly scalable networks by dynamically managing each device's bandwidth and time-slot allocation. This allows Symphony Link to support thousands of devices on a single network without compromising network performance; 
  • Flexibility: Symphony Link supports multiple application protocols, including MQTT, CoAP, and HTTP, allowing developers to choose the protocol that best suits their needs.
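The scalability claim rests on synchronous, gateway-managed time slots rather than random access. The toy round-robin allocator below shows how a gateway could guarantee each device a collision-free transmit opportunity; the scheduling policy is invented for illustration, since Link Labs does not publish SLP's actual algorithm.

```python
def assign_slots(device_ids, slots_per_frame):
    """Round-robin: device i transmits in slot i % slots_per_frame of
    frame i // slots_per_frame, so no two devices ever share a slot."""
    schedule = {}
    for i, dev in enumerate(device_ids):
        schedule[dev] = (i // slots_per_frame, i % slots_per_frame)
    return schedule


devices = [f"node-{n}" for n in range(10)]
schedule = assign_slots(devices, slots_per_frame=4)
print(schedule["node-0"])  # (0, 0): frame 0, slot 0
print(schedule["node-9"])  # (2, 1): frame 2, slot 1

# No two devices share the same (frame, slot) pair:
assert len(set(schedule.values())) == len(devices)
```

Contrast this with pure ALOHA, where every added device raises the probability that two transmissions collide.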

As mentioned above, Symphony Link was created to overcome the limitations of the LoRaWAN system. Let's compare the two protocols to see whether that goal was achieved.

 

Comparison of Symphony Link and LoRaWAN

Symphony Link and LoRaWAN are wireless communication protocols designed for Internet of Things (IoT) applications requiring long-range, low-power, and reliable connectivity. While they share many similarities, let's have a look at the differences.

  • High-density environments: Symphony Link is designed to work in high-density environments where multiple devices compete for wireless connectivity. It uses a protocol that can handle a higher volume of data and connections, making it more suitable for applications with many IoT devices in a small area.

LoRaWAN is also a low-power, wide-area network (LPWAN) protocol that can operate in high-density environments with multiple devices competing for wireless connectivity. However, LoRaWAN uses a form of Chirp Spread Spectrum (CSS). 

Both modulation techniques are designed to allow multiple devices to share the same frequency channel, but they use different methods to achieve this.

  • Security: Symphony Link provides advanced security features, including end-to-end encryption, authentication, and access control, which make it more secure than LoRaWAN (which relies on AES encryption). It is suitable for applications that require high levels of security, such as financial transactions or healthcare applications.

  • Scalability: Symphony Link is designed to support networks with thousands or even millions of devices, making it a good choice for large-scale IoT deployments. In contrast, LoRaWAN is typically used in smaller-scale networks.
  • Integration: Symphony Link supports a range of application protocols, including MQTT, CoAP, and HTTP, making it easier to integrate with existing systems and platforms. LoRaWAN has a more limited range of application protocols.

In summary, Symphony Link may be preferred over LoRaWAN in applications that require high-density connectivity, advanced security features, scalability, interference reduction, or integration with existing systems and platforms. However, the choice of the protocol will ultimately depend on the specific requirements of the application and the available resources.

For a balanced view, we should also mention the weaknesses.

 

The following are the drawbacks of Symphony Link:

  • It requires LoRa chipsets and Symphony Link software, which adds vendor dependency. A system designed around the Symphony Link protocol depends on Symphony Link software and hardware components: every device in the system must be compatible with the protocol and run the Symphony Link software to operate.

Meanwhile, LoRaWAN devices can communicate with LoRaWAN gateways from different manufacturers, providing a high degree of interoperability and flexibility.

  • It is used by a relatively small community of users.

In general, Symphony Link is a worthy competitor to LoRaWAN; however, it is less popular.

LoRaWAN is an open standard maintained by the LoRa Alliance. This means that LoRaWAN has a larger ecosystem of devices and vendors that support the protocol, while Symphony Link is more tightly controlled by a single vendor (Link Labs).

LoRaWAN has been available for several years and has achieved a high level of market penetration in many countries. In contrast, Symphony Link is a relatively new technology and has yet to be as widely adopted. This can make it more difficult for developers and businesses to find components and devices that are compatible with Symphony Link, which can limit its popularity.

LoRaWAN has a well-established ecosystem, a larger community of developers, and is more widely adopted, making it a popular choice for many IoT applications.

Overall, Symphony Link has its strengths and advantages, but its popularity and adoption are still growing, and it may take time for it to become as widely adopted as LoRaWAN.

The Inmost team is eager to test the advantages of Symphony Link in practice and contribute to the popularity of this promising protocol.

 

Unlocking the potential of NLP with AWS services

The increased interest in NLP

Although some users do not share the massive enthusiasm for ChatGPT, calling it a creativity killer, its ability to accelerate and optimize many processes has made numerous entrepreneurs and businesses increasingly interested in NLP tools powered by Artificial Intelligence and Machine Learning.

 

So, what is NLP? 

NLP stands for Natural Language Processing, a field of study within computer science and artificial intelligence (AI) that focuses on the interaction between computers and human language. NLP involves the development of algorithms and techniques that allow computers to analyze, understand, and generate human language in a way that is very similar to the way humans communicate with each other.

NLP uses various techniques, such as machine learning, deep learning, and neural networks, to help computers understand the nuances of human language. NLP algorithms can be trained on large data sets of human language to learn patterns and structures such as grammar, syntax, and meanings.

For example, ChatGPT is an NLP tool designed to generate human-like responses to input text. When a user enters text, the model generates a response based on the patterns it has learned from the training data. 

NLP deals with a wide range of tasks, including language translation, speech recognition, sentiment analysis, text classification, and information extraction. These tasks are usually performed on unstructured data, such as text documents, audio recordings, and social media messages.

 

How can it benefit businesses?

  • Improving customer service: NLP can be used to automate customer service tasks, such as answering customer requests or distributing queries to the appropriate departments. This can help businesses improve customer satisfaction and reduce the workload of support staff. NLP-powered chatbots and virtual assistants can be used to provide immediate responses to customer queries and provide personalized recommendations. They can also help reduce waiting time and provide 24/7 support.
  • Increasing operational efficiency: NLP-based text analytics automatically extracts information from large amounts of unstructured data, such as customer reviews, social media posts and customer service calls. This can help companies identify trends, track brand sentiment, and gain valuable insights into customer needs and preferences. NLP can automate various business processes, such as content moderation, data extraction and categorization, reducing manual work and streamlining operations.
  • Improving marketing: NLP enables businesses to understand their customers better and deliver more personalized, targeted messaging. Keyword extraction helps identify keywords and topics relevant to an enterprise's target audience, which helps create content and marketing messages. Methods such as sentiment analysis can be used to understand customer feedback and reviews, helping companies learn how their customers perceive their brand and products. This can help sharpen marketing messages and improve customer interactions.
  • Improving compliance: NLP can automatically detect and flag inappropriate or offensive content in text data, which can help businesses comply with regulations and avoid legal risks. It can automatically monitor regulatory compliance by analyzing documents for relevant keywords, phrases, and patterns, which can be extremely useful for contract management, Anti-Money Laundering (AML), fraud detection, and data privacy compliance.

 

How difficult and expensive is the implementation of NLP?

The complexity and cost of NLP implementation into business processes can vary depending on several factors, including the size of the company and the specific NLP use cases. In general, it can be challenging, time-consuming, and expensive, especially if you want to develop everything from scratch. NLP requires a large amount of data to train and test models and significant computing resources.

But NLP tools can be much more accessible and cost-effective due to cloud services provided by companies such as Amazon Web Services (AWS). These services deliver easy-to-use APIs and pre-built models that can be integrated with other software applications.

 

What are NLP services offered by AWS?

One of the most famous NLP solutions from AWS is probably Alexa. This virtual assistant can understand natural language commands and perform various tasks like playing music, controlling smart home devices, providing weather updates, and answering questions. Alexa uses advanced Natural Language Processing (NLP) techniques to interpret and understand user input.

Amazon Lex, a fully managed service built on the same deep learning technologies that power Alexa, provides a platform to build, test, and deploy chatbots and other conversational interfaces that can understand natural language input and respond with appropriate actions. Developers can use the Lex console to define intents and slots, build dialogue flows, and integrate with external services.

Amazon Polly converts generated text responses into speech. This text-to-speech service can generate lifelike speech in multiple languages and voices, using deep learning technologies and advanced prosody models, resulting in high-quality, natural-sounding speech output.

Amazon Translate uses neural machine translation technology to provide high-quality translations, supporting 75 different languages.

For working with large volumes of data and documents:

  • Amazon Comprehend: an NLP service that can extract insights and relationships from text data. It can perform tasks like sentiment analysis, entity recognition, language detection, and key phrase extraction.
  • Amazon Textract: a service that can extract text and data from scanned documents, forms, and tables.
  • Amazon Kendra: an enterprise search service that uses machine learning to provide natural language search capabilities across multiple data sources, like documents, FAQs, and wikis.
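To illustrate how little glue code these services require, the sketch below calls Amazon Comprehend's sentiment API via boto3. It assumes AWS credentials are configured and the boto3 package is installed; the helper that splits long text into pieces under Comprehend's per-request size limit (5,000 bytes at the time of writing — check the current quota) is pure Python.

```python
def chunk_text(text: str, max_bytes: int = 5000):
    """Split text into pieces whose UTF-8 encoding fits the API limit.

    Splits on whitespace so words are never cut in half.
    """
    chunks, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate.encode("utf-8")) > max_bytes and current:
            chunks.append(current)
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks


def detect_sentiment(text: str, language: str = "en"):
    """Send each chunk to Amazon Comprehend and collect the verdicts.

    boto3 is imported lazily so the pure-Python chunking above can be
    used without AWS credentials.
    """
    import boto3  # requires the boto3 package and configured credentials
    client = boto3.client("comprehend")
    return [client.detect_sentiment(Text=c, LanguageCode=language)["Sentiment"]
            for c in chunk_text(text)]


print(chunk_text("a " * 10, max_bytes=7))  # tiny limit to show the splitting
```

In a real pipeline you would aggregate the per-chunk sentiments (e.g., by majority vote) or use Comprehend's batch API for many short documents.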

 

These AWS NLP services can be used individually or in combination to build applications that can analyze, understand, and generate natural language.

By integrating these services with other AWS offerings, such as the unified communications service Amazon Chime, companies can create robust and cost-efficient solutions that improve customer engagement and satisfaction and provide effective communication and collaboration tools for businesses of all sizes, from virtual meetings to contact centres to virtual assistants.

The Inmost team has expertise in building solutions that combine AWS services with NLP capabilities, and we can help you create high-performance and cost-efficient solutions that drive business success.

Don't hesitate to contact us: https://inmost.pro/contact-us/

 

Why can the Response Time of your application be crucial for your business?

What is the response time?

Applications, software, and websites receive requests from users, and the time it takes to respond to their interactions can have a dramatic impact on application efficiency and user satisfaction. Response time is the time interval that begins when a user clicks a button on a web page and lasts until the server returns the complete data.

There are numerous reasons why an application may be slow to respond to requests, such as high website traffic (too many concurrent requests), memory and compute resource leaks, slow database queries, limited network bandwidth, or even poor application logic.

It is essential to know and monitor the factors that cause the limited performance of your application. This is the first and most crucial step to reducing response times and improving overall performance.
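Monitoring can start simple: wrap request handlers with a timer and flag anything slow. A minimal sketch follows; the threshold value and print-based logging are illustrative choices, and a production setup would ship these measurements to a metrics system instead.

```python
import functools
import time


def timed(threshold_ms: float = 1000.0):
    """Decorator that measures a handler's execution time and flags
    calls slower than the threshold."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            if elapsed_ms > threshold_ms:
                print(f"SLOW: {func.__name__} took {elapsed_ms:.1f} ms")
            return result
        return wrapper
    return decorator


@timed(threshold_ms=50)
def handle_request():
    time.sleep(0.1)  # stand-in for a slow database query
    return "ok"


handle_request()  # prints a SLOW warning (~100 ms > 50 ms threshold)
```

Even this crude instrumentation immediately tells you which handlers to investigate first.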

 

Why is it so important?

While response time is important for in-house software that underpins the productivity of work processes, it becomes a truly crucial factor in e-commerce applications. Slow page response times can discourage visitors and push them to a competitor's site. This immediately deprives the company of a potential customer and makes the site less competitive. When users leave, the page's bounce rate increases, which negatively affects search engine rankings.

Customer satisfaction surveys have consistently shown a positive correlation between faster response times and higher customer satisfaction. According to Forrester Research, 77% of customers believe that the most important thing a company can do to provide them with good service is to value their time.

 

What is considered to be a good response time?

According to Jakob Nielsen, a web usability researcher and consultant, there are three main time limits, determined by the abilities of human perception, that should be kept in mind when optimizing the performance of websites and applications.

0.1 second: gives a feeling of immediate response, providing the impression that the user, not the computer, controls the outcome. In his article "Need for Speed", Dr Nielsen mentions that research on a wide variety of hypertext systems has shown that users need response times of less than one second when moving from one page to another.

1.0 second: the user's flow of thought is not interrupted, although the delay is noticeable. Users know the computer is generating a result, but they still feel in control of the overall experience and that they are moving freely rather than waiting on the computer. This level of responsiveness is essential for good navigation.

10 seconds: from 1 to 10 seconds, users feel dependent on the computer and wish it were faster. After 10 seconds, they start focusing on other things, making it challenging to regain their attention when the computer finally responds.

A 10-second delay often forces users to leave the site immediately. And even if they stay, the possibility of successfully completing the user's tasks is significantly reduced.

 

Digital giants' experiments

This means a response time of more than 1 second is problematic and needs improvement. The higher the response time, the more likely users will leave your website or application. 

Experiments by major digital giants confirm that even small changes in response time can cause serious consequences.

Google found that going from a page with 10 results loading in 0.4 seconds to a page with 30 results loading in 0.9 seconds reduced traffic and advertising revenue by 20%. Reducing the Google Maps homepage from 100KB to 70-80KB resulted in a 10% increase in traffic in the first week and an additional 25% in the following three weeks.

Amazon tests revealed similar results: every 100 ms increase in the load time of Amazon.com decreased sales by 1%.

Microsoft Live Search experiments showed that when search results pages were slowed down by 1 second, the number of queries per user decreased by 1.0%, and the number of advertisement clicks per user decreased by 1.5%.

After a 2-second page slowdown: Requests per user decreased by 2.5%, and advertisement clicks per user decreased by 4.4%.

 

What Metrics are used while measuring Response Time?

Let's look at six of the most important metrics to watch and the value they provide.

Response Metrics

  • Average response time is the average time taken per round-trip request. It includes loading time for HTML, CSS, XML, images, JavaScript files, etc., so slow components in the system raise the average.
  • Peak response time helps identify potentially problematic components. It reveals issues with the website or the system when a particular request is not handled correctly.
  • Error rate is the percentage of problem requests relative to all requests. It covers all HTTP status codes indicating a server error, and also counts timed-out requests.

Volume Metrics

  • Concurrent users measure how many virtual users are active at any given time. Although this is similar to the number of requests per second, the difference is that each concurrent user can generate a large number of requests.
  • Requests per second measures the number of requests that are sent to the server every second, including requests for HTML pages, CSS style sheets, XML documents, JavaScript files, images, and other resources.
  • Throughput measures the amount of bandwidth in kilobytes per second consumed during a test. Low throughput may indicate the need to compress resources.
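These metrics are straightforward to derive from raw request logs. The sketch below computes them over a synthetic sample; the record layout and the error criterion (status code 400 or above) are illustrative assumptions.

```python
# Each record: (response_time_ms, http_status, bytes_transferred)
requests = [
    (120, 200, 5_000),
    (340, 200, 12_000),
    (90, 404, 1_000),
    (1500, 500, 2_000),
    (210, 200, 8_000),
]
test_duration_s = 2.0  # wall-clock length of the measurement window

avg_response_ms = sum(r[0] for r in requests) / len(requests)
peak_response_ms = max(r[0] for r in requests)
error_rate = sum(1 for r in requests if r[1] >= 400) / len(requests)
requests_per_second = len(requests) / test_duration_s
throughput_kbps = sum(r[2] for r in requests) / 1024 / test_duration_s

print(f"average response: {avg_response_ms:.0f} ms")   # 452 ms
print(f"peak response:    {peak_response_ms} ms")      # 1500 ms
print(f"error rate:       {error_rate:.0%}")           # 40%
print(f"requests/second:  {requests_per_second:.1f}")  # 2.5
print(f"throughput:       {throughput_kbps:.1f} KB/s") # 13.7 KB/s
```

Note how a single 1500 ms outlier triples the average; this is why the peak metric is tracked separately from the mean.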

 

Response time: a crucial business factor

Users don't want to know the reasons behind the delays. They just realize they're getting poor service and are annoyed by it. Even a few seconds of delay is enough to create an unsatisfactory experience for the user. So, with repeated short delays, users will abandon the task and look for an opportunity to do it elsewhere. You can lose part of the sales simply because your site or application is too slow.

Leverage efficient and reliable solution development from the Inmost team, ensuring application resilience even under sudden, significant load spikes. Remember that when choosing between several software applications, assuming they are all equally reliable, users will always pick the fastest one.

 

LoRaWAN: the leading technology in the LPWA space

Last week the IoT Solutions World Congress was held in Barcelona, and the stand with the largest number of IoT devices was the LoRaWAN booth. Impressive. Let's find out why LoRaWAN is so popular for IoT.

 

IoT Glossary Definition

LoRaWAN is an abbreviation for Long Range Wide Area Network. It's a type of Low Power Wide Area Network (LPWAN) that uses open-source technology and transmits over unlicensed frequency bands. LoRaWAN was designed for Internet of Things (IoT) applications and provides a far longer range than Wi-Fi or Bluetooth connections. It works well indoors and is especially valuable in remote areas where cellular networks have poor coverage.

 

The difference between LoRa and LoRaWAN

It's not uncommon to hear LoRa and LoRaWAN used interchangeably, but they're two different things.

LoRa (Long Range) is an LPWAN protocol that defines the physical layer of a network. It's a proprietary technology owned by Semtech (a chip manufacturer) that uses Chirp Spread Spectrum to convert Radio Frequencies into bits so they can be transported through a network. LoRa is one of the technologies that make LoRaWAN possible, but it's not limited to LoRaWAN, and it's not the same thing.

LoRaWAN (Long Range Wide Area Network) is an upper-layer protocol that defines the network's communication and architecture. More specifically, it's a Medium Access Control (MAC) layer protocol with some Network Layer components. It uses LoRa and refers explicitly to the network and to how data transmissions travel through it.

 

The main characteristics

LoRaWAN has two key characteristics that make the technology particularly suitable for specific IoT markets. 

Firstly, it is an LPWA technology, meaning that LoRaWAN-connected devices can be battery-powered, with battery lives of potentially several years. LoRaWAN networks can also be deployed as wide-area public networks, much as cellular networks are deployed today.

Secondly, LoRaWAN operates in the licence-exempt spectrum, meaning that an end-user or network provider does not need to procure radio spectrum before deploying a network. These characteristics make for cheap and easy network deployment to provide connectivity for battery-powered sensing or actuating devices that can potentially operate for years with minimal maintenance requirements. The trade-off for this flexibility lies in LoRaWAN's limited data rates, which are much lower than today's cellular technologies but are often perfectly adequate for IoT devices. 

 

LoRaWAN Classes A, B, & C

LoRaWAN has three classes that operate simultaneously. 

Class A is purely asynchronous, which we call a pure ALOHA system. This means the end nodes don't wait for a particular time slot to speak to the gateway: they simply transmit whenever they need to and lie dormant until then. In a perfectly coordinated system over eight channels, every time slot could be filled with a message, with one node starting to transmit as soon as another finishes. In practice, nodes transmit at random, so collisions occur: two nodes collide if they transmit on the same frequency channel with the same radio settings. Because of these collisions, the theoretical maximum capacity of a pure ALOHA network is only about 18.4% of that ideal throughput.
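That 18.4% figure is the classic pure-ALOHA result: at normalized offered load G (frames per frame-time), throughput is S = G * e^(-2G), which peaks at G = 0.5. A quick sketch:

```python
import math

def aloha_throughput(G: float) -> float:
    """Throughput S of a pure ALOHA channel at normalized offered load G.

    A frame survives only if no other frame starts within its
    two-frame-time vulnerability window, giving S = G * e^(-2G).
    """
    return G * math.exp(-2 * G)

# Throughput peaks at G = 0.5, where S = 1/(2e), i.e. the ~18.4%
# of ideal channel capacity quoted for pure ALOHA systems.
print(f"peak utilisation: {aloha_throughput(0.5):.1%}")  # → peak utilisation: 18.4%
```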

Class B systems work with battery-powered nodes. Every 128 seconds, the gateway transmits a beacon, and all LoRaWAN base stations transmit their beacon messages simultaneously, synchronized to a one-pulse-per-second (1PPS) GPS signal: every GPS satellite in orbit transmits a message at the beginning of each second, allowing time to be synchronized worldwide. Each Class B node is assigned a time slot within the 128-second cycle and is told when to listen. You can, for instance, tell a node to listen on every tenth time slot, and when that slot comes up, a downlink message can be transmitted to it.

Class C allows nodes to listen constantly and send downlink messages anytime. This system is used primarily for AC-powered applications because it takes a lot of energy to keep a node actively running.

 

Where to use

LoRaWAN networks have been deployed as wide-area private networks, notably to support applications such as smart metering and public space lighting, including street lighting. Deployment of networks for street lighting, in particular, can unlock new opportunities for smart streets.

Some companies have ambitious plans to deploy LoRaWAN as a wide-area public network technology, and this approach is rapidly gaining momentum. In this context, it is worth calling out three companies: Everynet, Helium, and Senet. Everynet has recently pursued a strategy of rolling out such networks, starting in Brazil and following with the USA and Indonesia. The company's networks cover more than 50% of the population of Brazil and more than 40% of the population of the USA, and Everynet will enhance this baseline coverage according to customer demand. Next on its list of priorities are several larger European countries.

Meanwhile, Helium claims to offer the largest LoRaWAN network in the world. Hotspots, or access points, can be deployed by any individual or business and offer coverage as part of the Helium network in return for payment, enabled and administered using distributed ledger (blockchain) technology. Currently, the Helium network comprises around 850,000 LoRaWAN hotspots. Senet positions itself as a carrier-grade network provider and has a two-way roaming agreement with Helium. In September 2022, Senet announced that it had expanded the build-out of its public LoRaWAN network across all five boroughs of New York City.

 

Forecast for LoRaWAN adoption

According to forecasts, by 2030 there will be 6.9 billion wide-area wireless IoT connections; 36% of these will use traditional cellular technologies, while 4.4 billion (roughly 64%) will use LPWA technologies.

Utilizing the power of LoRaWAN can solve a mix of connectivity challenges for things such as sensors and metering across industries, including smart cities, fleets, automotive, agriculture and industrial.

LoRaWAN is ideally suited for deployment as a campus area network in agricultural contexts, supporting devices ranging from soil-moisture sensors to temperature sensors in greenhouses, and from storage tank level monitoring to remotely controlled irrigation systems. In other enterprise contexts, the technology is well suited to monitoring the location and condition of various assets, enabling building automation solutions and many other applications.

One key scenario is deploying networks to support inventory management and monitoring, including stock level monitoring and warehouse management systems, which can reduce the load on warehouse employees, freeing them up for other higher-skilled tasks. 

Significant benefits can be gained from monitoring chillers and refrigerators in retail, hospitality, medical and warehouse contexts. In all these cases, a simple LoRaWAN temperature sensor connected to a private network can provide regular temperature readings and help ensure that refrigeration units are maintaining correct temperatures, reducing spoilage and waste.

 

Pitfalls

LoRaWAN is fine if you want to build on carrier-owned and operated public networks. Service providers are keen to compete in this space, so many choices exist. And for simple applications, where you don't have a lot of nodes and don't need many acknowledgements, LoRaWAN works. But if your needs are more complex, you will inevitably hit serious roadblocks. Many LoRaWAN users have not experienced those roadblocks yet because their networks are still relatively small. Try using LoRaWAN to operate a public network with thousands of users doing different things, and the difficulties will almost certainly skyrocket.

Also, developing and deploying a system around LoRaWAN is a complex process. It is a common misapprehension to think that LoRaWAN "works out of the box" the way some Wi-Fi or cellular modems might. You will want to be sure you understand the architecture and have a good grasp of how the system works before you decide it's the best route for you.

 

Alternatives 

Symphony Link is an alternative LoRa protocol stack developed by Link Labs. To address the limitations of LoRaWAN and provide the advanced functionality that most organizations need, Link Labs built its software on top of Semtech's chips.

Let's discuss it next time.

 

CELLR Vuforia

 

Briefly about the application

Inmost and CELLR aspired to create a proprietary application for producers and consumers of wine, where they could easily add new wines without relying on a competitor's licensed product. The application was created for wine producers and consumers participating as members of a wider wine community.

Inmost has developed both a mobile and web version of the CELLR app.

It allows you to get accurate information about a desired wine: producer, vintage, country of release, varietal, market pricing, and consumer reviews.

The app bonds a community where members can share feedback and trade. Inmost and CELLR have developed a trading system where users can make transactions conveniently and safely with wine authentication.

Users can create their own collection, store it in their cellar, and keep records noting when each wine was bought or drunk.

You can also import your inventory from other services or export it as a backup. Among other features, we have implemented convenient filtering and searching by Producer, Varietal, Country, Region, Appellation, Vintage, and Price. 

One of the advantageous features of the application is the search for wine by label photo.

Check this link for a more detailed description: https://inmost.pro/cases/peer-to-peer-trading-platform-that-directly-connects-wine-enthusiasts-and-cellar-owners.

 

Wine search by label photo

The user needs to press the search by photo button, point the camera at the label, and take a photo. The application will find the wine and show complete information about it. If there are multiple matching results, they will be sorted in descending order of relevance.

During the development of this functionality, we considered two possible concepts:

  • Concept 1. We could recognize the text in the photo of the label and perform a full-text search in a PostgreSQL database.
  • Concept 2. We could use one of the Machine Learning technologies like the Vuforia Engine.

 

How we evaluated recognition algorithms

To assess the quality of the recognition algorithm, we used a statistical method (check https://en.wikipedia.org/wiki/Type_I_and_type_II_errors).

We created a test set of 73 photos from the most popular producers, including equal numbers of labels with no text, a medium amount of text, and a lot of text.

Next, we made up a JSON file that contained metadata for each label and a link to a photo in an AWS S3 bucket.

For automatic testing, we needed a script that would iterate through all the photos in a folder and make requests to our backend server. The server uses two search algorithms: in the first case, PostgreSQL full-text search; in the second, the Vuforia Engine. After each server response, the script appends the search result to the JSON file. At the end of the run, the script counts the number of correct and incorrect search results for the label photos.

 

 

The server can return one of three search results:

  • the server replied that the wine was found, and its result matched the correct answer;
  • a Type 1 error: the server replied that the wine was not found, although it was in our database;
  • a Type 2 error: the server replied that the wine was found but made a mistake and returned another wine.

 

With the help of the script, we could test each of the two algorithms in just a few minutes and get, for each one, the number of correct answers and the number of Type 1 and Type 2 errors. By changing algorithm configuration parameters, such as weights (explained below) for full-text search and photo quality for the Vuforia Engine, we evaluated the algorithms again and again. In the end, we found the optimal configuration with the maximum number of correct answers and the minimum number of errors.
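Such a tally can be sketched in a few lines of Python (the response shape and field names here are hypothetical, not the actual CELLR backend API):

```python
def classify(expected: str, response: dict) -> str:
    """Bucket one server response into the three outcome categories."""
    if not response.get("found"):
        return "type1"                 # wine was in the database but not found
    if response.get("name") != expected:
        return "type2"                 # a different wine was returned
    return "correct"

def tally(test_set: list, responses: list) -> dict:
    """Count outcomes for a whole test run.

    `test_set` entries hold the ground-truth label name; `responses`
    are the corresponding (hypothetical) backend replies.
    """
    counts = {"correct": 0, "type1": 0, "type2": 0}
    for item, resp in zip(test_set, responses):
        counts[classify(item["name"], resp)] += 1
    return counts
```

In the real script, each response came from an HTTP request to the backend (one run per algorithm), and the counts were written back into the JSON metadata file.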

 

PostgreSQL Full Text Search Algorithm

We implemented this functionality on the server to test the first concept. The user takes a photo of the label in the mobile application, which sends a request to the server. The server converts the image to Base64 and calls the Google Cloud Vision service (Cloud Vision API) to extract text from the photo. The response is a string of text containing everything that could be recognized on the label. The server uses this text for a full-text search in the PostgreSQL database. See details about full-text search here: 

https://www.postgresql.org/docs/current/textsearch-intro.html.

Let's imagine that information about a wine is contained in a table in which the columns hold the name of the wine, the name of the producer, the grape variety, the year of release, the country, the region, and the bottle size. For full-text search in a PostgreSQL database, we create a text vector and assign a weight to each column. The possible weight coefficients are D=0.1, C=0.2, B=0.4, and A=1.0.

Empirically, we selected the best weights for our sample: wine_producer=1.0; wine_name=1.0; country, region, subregion=0.4; varietal=0.2; bottle_size=0.1.
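In PostgreSQL, those weights map onto setweight() labels and a ts_rank() weight array. A sketch of the query builder (the wines table and column names follow the description above; the exact schema is an assumption):

```python
def build_search_sql() -> str:
    """Compose a weighted full-text search over a hypothetical wines table.

    setweight() tags each column's tsvector with a label (A/B/C/D);
    ts_rank() then scores matches using the weight array
    {D, C, B, A} = {0.1, 0.2, 0.4, 1.0}.
    """
    vector = (
        "setweight(to_tsvector('simple', coalesce(wine_producer, '')), 'A') || "
        "setweight(to_tsvector('simple', coalesce(wine_name, '')), 'A') || "
        "setweight(to_tsvector('simple', coalesce(country, '') || ' ' || "
        "coalesce(region, '') || ' ' || coalesce(subregion, '')), 'B') || "
        "setweight(to_tsvector('simple', coalesce(varietal, '')), 'C') || "
        "setweight(to_tsvector('simple', coalesce(bottle_size, '')), 'D')"
    )
    return (
        f"SELECT *, ts_rank('{{0.1, 0.2, 0.4, 1.0}}', {vector}, query) AS rank "
        f"FROM wines, plainto_tsquery('simple', %s) AS query "
        f"WHERE ({vector}) @@ query "
        f"ORDER BY rank DESC LIMIT 10;"
    )
```

The %s placeholder takes the text extracted from the label photo; sorting by rank returns the closest matches first.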

After evaluating this algorithm, we obtained the following statistics:

Table A.1 - Number of errors using PostgreSQL full-text search

Item Total
Number of images 73
Correct results 51
Type 1 errors 16
Type 2 errors 6

 

With a slight complication, this algorithm can be substantially improved. We noticed that the critical search parameter is the wine producer: the number of producers is small, and if the producer is found correctly, the search over the other parameters can be significantly narrowed. If the producer is determined incorrectly, however, the search over the other parameters will inevitably return a false result, which we can see in the number of Type 2 errors.

The improved algorithm should therefore search in two iterations: first find the producer, and only when one is found, perform a second search among that producer's wines.
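The two-iteration idea can be sketched in plain Python (the data shapes and word-overlap matching rule here are illustrative stand-ins for the two database queries):

```python
def two_stage_search(label_text: str, producers: dict):
    """First match a producer, then search only that producer's wines.

    `producers` maps producer name -> list of wine names
    (a stand-in for the two database queries).
    """
    words = set(label_text.lower().split())

    # Stage 1: find the producer whose name shares the most words
    # with the text recognized on the label.
    best, best_hits = None, 0
    for producer in producers:
        hits = len(words & set(producer.lower().split()))
        if hits > best_hits:
            best, best_hits = producer, hits
    if best is None:
        return None  # no producer match: report "not found" rather than guess

    # Stage 2: search only within that producer's wines.
    for wine in producers[best]:
        if words & set(wine.lower().split()):
            return (best, wine)
    return (best, None)
```

Failing fast when no producer matches is what removes the Type 2 errors: the algorithm stops guessing among unrelated wines.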

However, we did not complicate the search algorithm and decided instead to test the second concept, using the Vuforia Engine machine learning model. 

 

Algorithm using Vuforia Engine

You can find information about the Vuforia Engine here: https://library.vuforia.com/objects/image-targets.

We created a Vuforia account and used our ready-made test set of photos. Vuforia allows you to easily integrate the engine into the Unity platform using the SDK (check https://library.vuforia.com/getting-started/getting-started-vuforia-engine-unity).

We set up a Unity project, connected the SDK, and used a webcam. We loaded target photos of labels of varying quality into the Vuforia Engine and showed actual bottles with labels in front of the webcam. We found that the target photo should have good contrast and should be taken in good lighting: the same photograph of a label at different quality levels gives very different results.

Vuforia assigns a conditional rating to each photo (from 0 to 5 points). Our observations were as follows: 

 

  • Pictures with a rating of 2 points or below were recognized poorly. The label has to be shown to the camera for a long time before it is recognized, so photos with this rating are not suitable for the application;
  • Photos with a rating of 3 points were recognized well;
  • Photos with a rating of 4 or 5 points were recognized instantly, as soon as the label hit the camera lens;
  • High-contrast photos performed much better, and monochrome photos significantly worse;
  • Recognition did not work well if there were spots or dirt on the label, even small ones;
  • If the label was bent in front of the camera or rotated at different angles, it was still recognized well.

 

We used the Vuforia REST API for automated testing.

(Check https://library.vuforia.com/articles/Solution/How-To-Use-the-Vuforia-Web-Services-API.html).

You send a POST request to the Vuforia service with a Base64-encoded photo in the request body, and Vuforia returns a match if one is found among the target photos. 
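A simplified sketch of preparing such a request with Python's standard library (the signing helper follows the HMAC-SHA1 pattern described in the Vuforia Web Services docs; the key values and canonical string below are placeholders, so check the linked reference for the exact request format):

```python
import base64
import hashlib
import hmac

def sign_request(secret_key: bytes, string_to_sign: str) -> str:
    """Vuforia-style request signature: HMAC-SHA1 of the canonical
    request string, Base64-encoded (per the Web Services docs)."""
    digest = hmac.new(secret_key, string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def build_query_body(image_bytes: bytes) -> dict:
    """Package a label photo as a Base64 payload, as described above."""
    return {"image": base64.b64encode(image_bytes).decode()}

# The body would then be POSTed to the recognition endpoint with an
# Authorization header carrying the access key and signature, e.g.
#   Authorization: VWS <ACCESS_KEY>:<signature>   (placeholders)
```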

 

 

Here are some of the examples we’ve tested:

 

1.jpg => found: {'target_timestamp': 1614817446, 'name': 'Constant_Diamond_Mountain_Vineyard_Cabernet_Franc'}

2.jpg => found: {'target_timestamp': 1614817446, 'name': 'Constant_Diamond_Mountain_Vineyard_Cabernet_Franc'}

3.jpg => found: {'target_timestamp': 1614817446, 'name': 'Constant_Diamond_Mountain_Vineyard_Cabernet_Franc'}

4.jpg => found: {'target_timestamp': 1614817446, 'name': 'Constant_Diamond_Mountain_Vineyard_Cabernet_Franc'}

 

Photo of a wine label without text:

 

1.jpg => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}

2.jpg => not found

3.jpg => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}

4.jpg => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}

5.jpg => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}

5.png => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}

6.jpg => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}

8.jpg => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}

 

1.jpg => not found

2.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}

3.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}

4.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}

5.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}

6.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}

7.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}

8.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}

 

1.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

2.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

3.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

4.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

5.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

6.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

7.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

8.JPG => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

 

We’ve evaluated the quality of the search algorithm for a test set of photos:

Table A.2 - Number of errors using Vuforia Engine

Item Total
Number of images 73
Correct results 61
Type 1 errors 12
Type 2 errors 0

 

We got fewer errors than with PostgreSQL full-text search, and they were all Type 1 errors, which means we need to improve the quality of the photos: we need photos that the Vuforia Engine will rate at 4 to 5 points. It won't be easy to find 1-2 million label photos of that quality. In one of the following sections, we will describe how we solved this problem.

 

Integrating Vuforia Engine into a Mobile App

At the development stage, we opened this functionality only to beta users. Our recognition algorithm now looks like this:

 

 

Before many search queries can succeed, we need a sufficiently large database of label photos. In addition, users can send photos that are not labels at all by pointing the camera at unrelated objects. When a user makes a photo search request, the wine may or may not be found. We created two AWS S3 buckets, where we store found and not-found photos separately, and two tables: vuforia_reco_images and vuforia_noreco_images. If a label photo is found, we store it in the reco bucket and add an entry to the vuforia_reco_images table. If the photo is not found, we save it to the noreco bucket and add an entry to the vuforia_noreco_images table.
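The routing decision itself is simple (the bucket names below are hypothetical; the table names come from our schema):

```python
def route_photo(found: bool) -> tuple:
    """Pick the S3 bucket and DB table for one search result.

    Returns (bucket, table); 'cellr-reco' / 'cellr-noreco' are
    hypothetical bucket names, the tables are the real ones.
    """
    if found:
        return ("cellr-reco", "vuforia_reco_images")
    return ("cellr-noreco", "vuforia_noreco_images")
```

Keeping the not-found photos is deliberate: they are candidates for later review and for growing the target-image database.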

 

 

 

Before enabling this search functionality in the Production version of the application, we need to fill our database with label photos. We need about 1-2 million decent-quality photos. This was the next task before integrating it into the mobile app. 

 

Obtaining a database of label photos

To obtain our database of label photos, we contacted the COLA service (check https://ttbonline.gov/colasonline/publicSearchColasBasic.do).

 

 

It is designed for product certification and contains over 2 million wine certificates. Each certificate contains the metadata we need: name, producer, year of issue, bottle volume, and, most importantly, a good-quality photo of the label.

We wrote a separate service that downloads these certificates, parses their content, saves the label photos to an AWS S3 bucket, and saves the metadata to our database. The COLA service has its own classification of product codes; the codes we are interested in are in the range 80-89C.

 

 

We created a folder in the AWS S3 bucket for each wine code.

 

 

After processing each certificate, the service creates a separate folder named after the wine, in which it saves a JSON file with the metadata and JPEG photos of the labels. We then adapt this data to our wine model before adding it to our database and upload the photos to Vuforia as Target Images.

 

 

To avoid overloading the COLA service with requests, we used the node-cron library (check https://www.npmjs.com/package/node-cron). With its help, our service started once an hour and made 300 requests. Thus, we downloaded 7,200 new certificates per day, with metadata and label photos, without putting a heavy load on the COLA service.

We launched two AWS EC2 instances, each of which downloaded and parsed certificates. Since the COLA products are divided by category codes, we easily split the work between the two servers by dividing the categories equally.

Consequently, each of the two AWS EC2 instances downloaded 300 certificates per hour and parsed them. It took us about 4.5 months to download all the certificates we needed with label photos: about 2.1 million.
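A back-of-the-envelope check of that timeline (numbers from the text; the real elapsed time also depends on downtime and ramp-up):

```python
CERTS_PER_HOUR_PER_INSTANCE = 300
INSTANCES = 2
TOTAL_CERTS = 2_100_000

per_day = CERTS_PER_HOUR_PER_INSTANCE * 24 * INSTANCES  # 14,400 certificates/day
days = TOTAL_CERTS / per_day                            # ~146 days
print(f"{per_day} certs/day -> {days / 30:.1f} months")  # → 14400 certs/day -> 4.9 months
```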

 

Enterprise Metaverse – the new way to do business

General business engagement

Metaverse was among the top trends of 2022 and continues to gain popularity beyond the gaming community. It will undoubtedly remain a priority for technical innovation and investment in 2023. Technologies such as AR, VR, IoT, 5G, blockchain, and cloud computing are fueling the next industrial revolution that will dramatically impact the future of business.

Numerous companies in industries ranging from education and healthcare to retail and manufacturing are already leveraging enterprise blockchain, cryptocurrencies, artificial intelligence, virtual reality, and NFTs. The technologies associated with the Metaverse have started bringing significant benefits and are gaining much attention in the business world.

 

Metaverse and Enterprise Metaverse definition

Today, the Metaverse is best defined as the gradual merging of the digital and physical worlds: an environment where the distinctions between our digital avatars and our physical selves blur. It is a world where we are surrounded by information and smart devices, interacting with data for work, education, entertainment, and more. It is a dynamic, open, and interoperable space.

An enterprise metaverse, by definition, is a metaverse that provides business growth opportunities for an enterprise. Its main goal is to connect people in a single virtual workspace, and digital twins are its foundation. Digital twins can be used to create a detailed virtual representation of any physical object, from specific items or assets to complex industrial environments, which can be extensive, such as roads, rail lines, factories, buildings, and warehouses. Once they are modeled, sensors and IoT connections can be used to bring them to life and synchronize them with the real world. 

 

What are the advantages for business?

Global reach

The most crucial advantage is a global reach. The Internet has helped companies overcome the long journey to customers and new markets that are geographically distant. The Metaverse can quickly and efficiently connect people from all over the world. In just a few minutes, you can arrange a business meeting with the digital avatar of your business partner from another continent.

New ways of interaction

Moreover, customers and businesses can connect, communicate and participate in a new way that meets modern challenges. Not long ago, for example, pandemic restrictions severely affected the events industry when concerts, sports games, festivals, conferences, and other events were forbidden for a long time. Mass events in the virtual world do not depend on such restrictions - concerts, fashion shows, and conferences are already actively held in the Metaverse.

You can already find agencies offering the organization of weddings and parties in a virtual environment.

The MICE industry is expected to continue expanding its presence in the Metaverse. Soon, many hotels will sell virtual meeting rooms and standard services, generating new revenue.

 

Digital workspace and employees training

Even though some companies are trying to bring their employees back to the office after the pandemic, the overall trend towards remote working is only increasing.

Companies don't need to rent an office and pay for its maintenance, and employees don't have to waste time and money getting to the workplace and back home - these are just the most obvious benefits of remote work.

With the Metaverse, the office comes to you wherever you are, and you can set up your virtual workspace the way you like it. The Metaverse enables employees to be constructive and productive, even when they're thousands of miles away from their workplace.

In the Metaverse, you can meet with colleagues, have group discussions, give presentations, and communicate through personalized avatars that make you forget that you are not in a real office.

Moreover, the Metaverse is an excellent environment for employee training. Any training materials and content can be easily created here, as well as simulations of dangerous situations that allow professionals such as firefighters or police officers to practice skills without risking their lives.

 

AI assistant

An AI assistant is a beneficial digital tool that can perform basic calculations, remind you of appointments, translate texts between languages, transcribe your speech, find the information you need, and do everything a regular secretary or personal assistant can do. A digital assistant is available 24/7 and never gets tired or takes a vacation or sick leave.

And although such assistants already exist in the real world, the metaverse environment greatly expands their capabilities and allows you to create any digital appearance for them.

 

Remarkable marketing opportunities

The Metaverse allows companies to create their own world representing their brand in a way that no video, traditional advertising, words, or images can match. Each world can be unique and create a comprehensive experience for the customer. Customers can interact with a digital copy of your product before they buy it, for example, by checking how a piece of furniture they want to purchase fits into the interior of their house.

Hotels can offer virtual tours that allow guests to experience their services, see the breathtaking sunset view from the terrace of a suite and even hear the sound of the ocean waves.

With the metaverse technology, you can conduct activities that are difficult or impossible to implement in reality, such as organizing a party on a yacht for potential customers from all over the world, where you can impressively present your new product or service.

Before launching it into production, you can test a new design or product on your target audience in a virtual space.

Such tools are priceless in terms of competition, gaining customer loyalty, and saving your time and money.

 

How soon will the Metaverse be fully launched?

Although the Metaverse is not fully implemented yet, and the process is unlikely to be completed in 2023, it will have a profound and lasting impact on industries and businesses well before it reaches its full potential. 

With the start of 2023, virtual reality, augmented reality, and digital technologies will be more intensively integrated into our lives. They will firmly take their place in many businesses and industries. Understanding how things work is vital for successfully embracing new opportunities the Metaverse offers.

 

 

How smart are Smart Contracts?

What are Smart Contracts? 

A "smart contract" - the term sounds promising. When we meet it in the context of blockchain technology, it seems we are dealing with something extremely complex and futuristic. 

Is it that complicated? Let's find out.

Smart contracts are computer programs, or algorithms of certain behavior, integrated into a blockchain network. If the predetermined conditions specified by the code are met, the contract triggers a particular action or sequence of actions. For example, if you put $5 in a ticket machine and press the "day ticket" button, it will print and dispense a certain predetermined type of ticket. 

As with a paper contract, fulfillment of the conditions is mandatory: only then will the transaction be performed and the users receive the agreed result. Once the algorithm completes successfully and the transaction is valid, it is recorded permanently in the ledger of the blockchain network.

So, we can say that the Smart Contract is a self-executing program with "if... then" logic based on blockchain. Easy enough.
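The ticket machine above can be written as exactly that kind of "if... then" program (a toy illustration, not an on-chain contract; the fares are made up):

```python
def ticket_machine(amount_paid, button):
    """Self-executing 'if... then' logic: dispense only when
    the predetermined conditions are met."""
    FARES = {"day ticket": 5.0, "single ticket": 2.5}  # hypothetical fares
    if button in FARES and amount_paid >= FARES[button]:
        return f"dispensed: {button}"
    return None  # conditions not met -> nothing happens
```

A real smart contract works the same way, except the conditions and the resulting state change are verified and recorded by the blockchain network rather than a single machine.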

However, one smart contract can include several different conditions, and one application can execute multiple smart contracts to perform a consistent set of related processes. There can be as many conditions as are necessary to complete the task successfully.

Currently, blockchain technologies supporting smart contracts are actively used for complex computing tasks that involve artificial intelligence (AI) and machine learning (ML).

This combination of technologies could form the basis for AI-enabled smart contracts. Unlike simple smart contracts, which can be developed manually, these would make it possible to create very complex, more responsive enterprise-level smart contracts and dApps, with the potential to seriously expand the capabilities of the technology.

 

Most popular smart contract development platforms

Ethereum was the first platform in the world to use smart contracts, and even today it is extremely popular among developers. Ethereum is now based on the PoS consensus algorithm. Its smart contracts are written in Ethereum's own smart contract programming language, Solidity. The execution environment for smart contracts is called the Ethereum Virtual Machine (EVM); many other blockchains, such as Avalanche, are EVM-compatible, allowing developers to move their smart contracts between platforms.

Hyperledger Fabric is an enterprise-focused private blockchain platform, originally contributed by IBM, that also supports smart contracts, or what Fabric calls "chaincode". The platform is capable of processing up to 20,000 transactions per second with no transaction fees. Hyperledger Fabric runs chaincode in Docker containers, which reduces the cost of smart contract applications. Fabric uses traditional high-level programming languages such as Java and Go and leverages a crash fault tolerant (CFT) consensus algorithm. For easier and more efficient smart contract development, it offers a set of tools, including Hyperledger Composer. The modular and versatile design of Hyperledger Fabric meets a wide range of industry use cases.

Like other next-generation smart contract platforms, Solana aims to solve scalability problems. It has a record-high throughput of 65,000 transactions per second. The main reason for such high performance is that Solana uses an innovative combination of Proof of History (PoH) and Proof of Stake (PoS) consensus algorithms. Transaction fees are very low - just $0.00025. 

Smart contracts built on Solana can be written in Rust, C++, and C. In addition, EVM compatibility is available in the Solana ecosystem through third-party solutions, allowing developers to run Ethereum-based smart contracts there.

 

Where can smart contracts be used now?

Smart contracts eliminate unnecessary paperwork and the cost of expensive intermediaries that are an essential part of traditional contracts, transactions, and exchanges while maintaining the transparency and traceability of the blockchain and reducing counterparty risks. These features make smart contracts a valuable tool for various use cases.

Smart contracts are an essential component of many DeFi (Decentralized Finance) applications and have already significantly influenced their evolution. DeFi applications (dApps) provide services that are alternative to the banking and finance industry - such as trading, lending, borrowing, exchange, and a range of other financial services - as well as entirely new types of products and decentralized business models that can deliver significant value to users.

Blockchain technology in the gaming industry could allow players to get more benefit from in-game purchases and asset accumulation. In games, blockchain technology is usually represented by NFTs based on smart contracts. They help, for example, to prevent manipulation by developers intended to induce users to make repeated purchases. The user can be sure that a previously purchased artifact will not lose its properties or in-game value after the next game update; the immutable nature of smart contracts does not permit such mischief.

Smart contracts can protect digital artists' property rights by providing transparent royalty rules. The digital environment is a great platform for musicians, especially solo artists or beginners, to introduce their work to audiences. But low traceability, miserable payouts due to a large number of intermediaries, and long, opaque royalty payment processes create a lot of problems. With smart contracts, musicians could be paid a set sum every time a user presses play on one of their tracks. The royalty payout process would complete in seconds instead of months, eliminating unnecessary third parties and building a direct interaction between artists and their fans. 

In addition to the examples mentioned above, smart contracts are already used in real estate, insurance, healthcare and clinical trials, retail, supply chain management, and many other areas.

 

Advantages offered by smart contracts

So let’s summarize the benefits of deploying smart contracts:

  • Transparency - the contract is the code, which is publicly viewable by everyone. The transparency, traceability, and immutability of data help to create a better sense of trust;
  • Security - blockchain-based smart contracts offer a very secure and transparent way to document and automate business processes. Data encryption can also be used to further protect transactions stored on the blockchain;
  • Cost-efficiency - processes enabled by smart contracts require less human intervention and no third parties, which reduces costs;
  • Speed - the absence of intermediaries reduces time as well as economic costs. Immediate, automated execution as soon as the conditions are met saves time compared to manual and third-party contracts.

 

Very soon, smart contracts will become more complex, smarter, and much more widespread as people grow more confident in blockchain technology.

The Inmost team is always here to help you leverage all the benefits of smart contracts. We have chosen the Hyperledger Fabric platform to provide the highest level of data protection, reduce costs, and make customers' business processes smarter.

 

Does it really Matter?

Matter is a communication standard for the smart home.

 

Background

Initially, it was named Project Connected Home over IP (CHIP).

Announced on 18 December 2019, CHIP aimed to reduce fragmentation across different vendors and achieve interoperability among smart home devices and Internet of Things (IoT) platforms from various providers.

In May 2021, the Zigbee Alliance changed its name to the Connectivity Standards Alliance and rebranded Project CHIP as Matter.

The standard has been promoted over the past several years by major ecosystem players, including Apple, Google, Samsung, IKEA, Signify, and others.

 

What is the reason for seeking a single standard?

The reason is a mishmash of incompatible brands and devices. Hubs, communication protocols, and smart assistants operate only within their unique ecosystems. This "walled garden" limitation forces consumers to surround themselves with devices that work only within a singular ecosystem or face compatibility issues.

 

IP-based

The standard is based on the Internet Protocol (IP) and works through one or several compatible border routers, avoiding the need for multiple proprietary hubs.

Matter makes it easier for device manufacturers to build devices that are compatible with smart home and voice services such as Amazon’s Alexa, Apple’s Siri, Google’s Assistant, and others. The first specification release of the Matter protocol will run on Wi-Fi and Thread network layers and use Bluetooth Low Energy for commissioning.

 

Connectivity

Most smart home brands have promised support for Matter. The list includes Amazon, Apple, Aqara (Lumi), Arlo Technologies, Belkin Wemo, Comcast, Eve Systems, Ikea, GE Lighting, Google, Infineon, Leedarson, LG Electronics, Mui Lab, Nanoleaf, Nordic Semiconductor, NXP Semiconductors, Philips Hue, Qorvo, Samsung SmartThings, Schlage (Allegion), Sengled, Texas Instruments, Tuya Smart, Universal Electronics, and Veea.

Samsung has also been active in bringing Matter to life. They launched Matter functionality via its SmartThings hubs and Android app. Matter devices are managed by one app instead of multiple apps from different device manufacturers.

Philips Hue announced that its Hue Bridge, a smart lighting hub, is Matter certified. The company has promised to make all but two of its new and existing smart lights and accessories compatible with Matter via a software update to the Hue Bridge in the first quarter of 2023. The two exceptions are the Hue Play HDMI sync box and the dial of the Hue Tap Dial Switch, neither of which is supported by the current version of Matter.

 

What the CSA (Connectivity Standards Alliance) brings:

For consumers:

  • Simple process of selecting smart home devices;
  • No need to worry about compatibility even if devices are from disparate ecosystems;
  • More choice and a much more comprehensive selection to build a perfect smart home.

For retailers:

  • No need to seek out products from only individual ecosystems;
  • As a result, more potential customers and more profit.

For manufacturers:

  • Matter promises more innovation and less time to market;
  • The standard's open-source, IP-based nature streamlines product development. As a result, more compatible devices.

 

Forecast for Matter

Matter is still in development, so it is difficult to speculate on the standard's future impact.

For now, everything mentioned above should be treated as a prediction.

However, with big players such as Apple, Google, and Amazon in the game, the probability that these predictions come true is rather high.

 

Why do companies choose Private Blockchain?

Trust, security, accountability, and transparency have essentially become synonymous with blockchain these days. This is especially true of private blockchain, which, as we discussed in a previous article (https://inmost.pro/blog/private-blockchain/), allows you to create versatile business applications with high scalability in a trusted environment, though at the cost of decentralization.

Most companies that have already implemented blockchain technology do not consider this a disadvantage. On the contrary, allowing only authorized users to access the network provides the highest level of security while preserving all the other benefits of blockchain. Private blockchains are for businesses that want more privacy and more control over their data.

Among the companies that are already actively taking advantage of the blockchain integration are Shell, Walmart, Mastercard, De Beers, Oracle, and many others.

Let's explore some use cases for private blockchain:

 

E-commerce

As the world of e-commerce is growing rapidly, numerous parties are involved in the operational processes, including consumers and companies buying goods or services, suppliers and delivery networks, and finally e-commerce platforms. 

As a rule, participants in this complex ecosystem are located in different countries and jurisdictions and transact in different national currencies, which leads to a truly complicated transaction process. Using third-party services usually brings problems such as high costs and fees, transactions taking up to several days, non-transparent payment processes, and unpredictable exchange rates.

At the same time, customers demand that purchases be processed quickly and without hidden fees. They want to rely on the security of their personal data and, of course, have a guarantee of brand authenticity.

The integration of private blockchain technology can solve all the above problems for both e-commerce platforms and consumers. Blockchain allows companies to track products through the entire logistics chain, from manufacturing to delivery. It provides a secure, tamper-proof method of recording transactions and allows businesses to verify the identity of customers and suppliers to prevent fraudulent activity and counterfeit goods. It also eliminates the need for intermediaries, enabling real-time transaction processing and exchange rates while reducing the cost of third-party services.

 

Internet of Things

The tremendous growth in the use of IoT technology has a snowball effect these days. Smart devices, smart homes and, subsequently, smart cities have already become a reality. 

The IoT infrastructure consists of sophisticated chips and sensors embedded in physical objects and transmitting data to the network. This network provides data transactions on multiple devices operated and owned by different organizations, making it difficult to determine the source of data leaks in case of a cyber attack. 

Moreover, the IoT generates immense volumes of data with multiple interested parties involved, and the ownership of that data is not always obvious.

Blockchain technology can solve these problems by making IoT devices safer and more efficient. A private blockchain can provide much faster data processing among billions of connected devices, as well as an additional layer of security for IoT networks. Blockchain enables a much higher level of encryption, making it virtually impossible to overwrite existing data records, and smart contracts let sensors transmit data without the need for a trusted third party.

 

Healthcare 

Sensitive medical records demand the highest level of protection, as their unauthorized use poses significant risks to fundamental patients' rights. Improving data processing and storage, as well as optimizing workflows, is a top priority for the healthcare industry.

In 2016, Johns Hopkins University published a study showing that the third leading cause of death in the United States was medical errors caused by uncoordinated care, such as planned procedures that are not performed as intended or errors in patient records. Blockchain-enabled solutions can help enhance trust in sharing data while eliminating problems with data transparency and interoperability. 

Consider, for example, a blockchain-based electronic record system that acts as a comprehensive single view of the patient's medical history, in which no doctor's note, prescription, or test result can be falsified or altered. Such a complete, single source of medical record information creates a better experience for patients and healthcare providers and helps clinics significantly reduce legal and operational costs.

Private blockchain technology also meets high security standards in terms of data privacy, sensitivity, and access control.

 

Insurance

The scope of insurance services and things that can be insured continues to expand and adapt to current reality - for example, the number of insurance services related to the pandemic has recently increased significantly. You can insure not only a house or a car but also any piece of furniture or a beloved pet, and there are plenty of exotic cases, such as insurance against unrequited love. Meanwhile, insurance companies face constantly growing compliance demands, intense competition, fraud allegations, and third-party fees.

Plenty of them have already adopted blockchain technology to overcome these challenges. But the specific nature and sensitivity of the data make public blockchain inappropriate here. Private blockchain, with its limited network access, smart contracts, immutable and trustworthy records, and transactional power, is a perfect tool for streamlining processes, efficiently handling claims, and automating risk modeling and audits.

Additionally, a private blockchain can significantly improve the approach to KYC (know your customer) and AML (anti-money laundering) compliance by creating a single blockchain-based database that partner companies can use to facilitate customer identification.

Blockchain can also streamline regulatory compliance and significantly reduce its costs. Insurers would not need to submit compliance reports manually, as regulators could access all relevant information on the blockchain in real time.

 

Benefits of private blockchain

Taking a closer look at each of these industries, it is easy to identify the following common benefits of implementing private blockchain technology:

  • Higher security of data storage and transmission: the ability to monitor and control access and any changes;
  • Transparency: guarantees the quality of goods and services and the immutability of all transactions and deal terms;
  • Cost-efficiency: smart contracts that clearly define the parties' responsibilities and the conditions for transaction validation completely eliminate the need for expensive intermediary services;
  • Higher transaction processing speed: a private blockchain uses as many resources as necessary to process transactions, enabling faster and more efficient throughput.
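The transparency and immutability claims above boil down to one mechanism: each block carries a hash of its predecessor, so editing history breaks the chain. The following minimal Python sketch illustrates only that linking idea; it is not a real blockchain (no consensus, no permissions, no network), and all transaction names are made up.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block's contents, including the link to its predecessor."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev_hash": prev}
    block["hash"] = block_hash({"data": data, "prev_hash": prev})
    chain.append(block)

def verify(chain: list) -> bool:
    """Any edit to an earlier block invalidates its stored hash and every later link."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
        if block["hash"] != block_hash({"data": block["data"],
                                        "prev_hash": block["prev_hash"]}):
            return False
    return True

chain = []
append_block(chain, {"tx": "supplier -> warehouse"})
append_block(chain, {"tx": "warehouse -> store"})
print(verify(chain))              # True: untouched ledger
chain[0]["data"]["tx"] = "tampered"
print(verify(chain))              # False: tampering is detected
```

In a private blockchain the same check is performed by every authorized node, which is why a record, once validated, cannot be quietly rewritten.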

 

Of course, any company from any industry would only benefit from such advantages. This means that the list of use cases for private blockchain can be much wider. In the long term, blockchain will merge the currently separate ecosystems into a fundamentally new business environment with different customer-centric approaches and products.

And the question every entrepreneur should be thinking about now: Is my company ready for such a transformation?

 

Everything you want to know about Z-Wave

When it comes to wireless technology, you probably know all about Bluetooth and Wi-Fi, but what about Z-Wave?

History of Z-Wave

Z-Wave was created by two Danish engineers at a start-up company named Zensys. Their original goal was to build something to automate homes, but the result grew into a protocol implemented by companies all over the world.

In 2008, the technology was acquired by Sigma Designs. After seeing its potential, many other companies joined what became known as the Z-Wave Alliance.

What is Z-Wave?

Z-Wave is the leading wireless technology behind many of the secure, trusted brands that are working to make everyone’s home smarter and safer. This technology is used to power sensors, modules, plugs, remotes, and many more smart devices. With Z-Wave, smart home products can communicate with each other no matter what brand or platform they are built on, communicating through a central smart hub.

Z-Wave briefly

  • Wireless tech designed for home automation;
  • Popular alternative to Zigbee, Matter, Wi-Fi & Bluetooth;
  • Uses regional frequencies: Europe (868 MHz), North America (908 MHz), India (865.2 MHz), China (868.4 MHz);
  • Standardized wireless protocol, devices from various manufacturers can talk to each other;
  • Uses mesh networking, providing a self-healing connection;
  • Long-range, especially combined with the mesh network;
  • Extra layer of security with encryption;
  • Low power usage & long battery life;
  • Premium technology at a premium price;
  • Requires a Z-Wave hub.

 

Communication protocol

Z-Wave is a closed-standard protocol used on mesh networks for wireless communication between intelligent devices in homes, offices, and other places.

This protocol supports communication between devices in a closed network, meaning that the governing code of Z-Wave is not publicly accessible. This prevents the code from being altered by anyone. Every Z-Wave device also has a unique ID that identifies it to any Z-Wave controller. This closed structure is the core of the Z-Wave protocol, as it assures effective interoperability and security.

The Z-Wave protocol uses radio-frequency signals for communication between appliances. Specifically, the protocol supports communication with at most 232 devices using the 908.2 MHz frequency. Devices can communicate successfully within a range of about 50 m. These features make Z-Wave a compelling option for Internet of Things (IoT) home automation. For hospitals, malls, offices, and other buildings with large areas, Zigbee is usually a better fit.

Security

Security plays a vital role in determining what network structure to use.

Z-Wave uses the same encryption standard as Zigbee - AES-128 - for information security and has made it a mandatory benchmark for certification. AES-128 is a trusted security standard used by online banks and government agencies. The Z-Wave protocol adds an extra layer on top: the Security 2 (S2) layer is also mandatory for every device that needs Z-Wave certification. This layer protects smart devices from being used in a DDoS attack.

Z-Wave Plus

So what’s the difference between the Plus and non-Plus versions of Z-Wave? The ‘Plus’ means the device contains a newer generation of the technology - the 500 series chip built on the Z-Wave hardware platform, also known as Next Gen or Gen5. Z-Wave Plus certified products feature a high level of compatibility and security and enhance your experience with extended features and faster, easier installation and setup. Z-Wave Plus products offer longer battery life, faster operation, increased wireless range, and improved noise immunity. Plus devices can also pair with each other with an extra layer of security, making it even harder for anyone to snoop on your sensors and switches.

The regular Z-Wave devices and the Plus ones can seamlessly work together, so you never have to worry about that.

What is Z-Wave compatible with

Z-Wave is compatible with Samsung SmartThings, Fibaro smart sensors, GE Appliances, LG SmartThinQ, and many others. However, it cannot connect a massive number of devices at once: both the number of connected devices and the number of hops are limited. The Z-Wave protocol and its devices are also much slower than the Zigbee protocol and its devices.

Summarizing

Z-Wave is a powerful, energy-efficient and premium smart home technology. It has significant advantages over Bluetooth and it offers better battery life than Wi-Fi-based devices.

At the same time, Z-Wave is not a one-size-fits-all solution. It is low-power and thus not suitable for high-bandwidth appliances like wireless speakers. At the moment, the technology leads the market when it comes to sensors, whereas smart lights are more likely to support Zigbee, Wi-Fi, or the upcoming Matter technology (we will speak about Matter next time). It is ideal for users with a basic understanding of technology.

 

Five use cases of NFT beyond digital art with great potential

The most common use cases

The popularity of NFTs is usually associated with digital art. Non-fungible tokens have eliminated boundaries for creators, even those who are just starting their journey in digital art, allowing them to exhibit and sell their work all over the world. Any work of art, even the most primitive, unusual, and ugly, can find its admirers and be sold for impressive sums of money.

NFT avatars are gaining more popularity on social networks. Twitter, Reddit, Facebook and Instagram have already begun rolling out NFT technologies. 

The recent global socio-political situation caused disappointment in traditional and centralized social networks due to the lack of user data protection, government control, spread of propaganda and censorship. Decentralized social networks are increasingly in demand, and NFTs will play an important role in their development. 

However, is the use of NFT technology only limited to digital art and social media? Of course not.

Just as the use of blockchain is not limited solely to the crypto industry, NFT has stepped outside the common stereotypes and is now adopted by progressive companies to bring their business processes into a new reality.

 

Which industries can benefit from the integration of NFT technology?

Based on blockchain and smart contracts that provide secure data storage, allow terms to be clearly defined and agreements to be confirmed without intermediaries, NFTs can be applied in a broad range of areas. We'll look at just a few of them:

 

Fashion

In fact, the fashion industry was one of the first to embrace the potential of NFT. Fashion brands and designers quickly realized that the metaverse is a great opportunity to create, promote and sell digital outfits to the millions and potentially billions of people who want to style their digital avatars in line with the latest fashion trends.

Fashion NFTs can be offered in the form of virtual clothing and accessories that consumers buy and wear in the metaverse, or as digital copies of physical creations.

There are already design studios producing virtual 3D clothing. They have many more options than real clothing manufacturers and create the most incredible and futuristic models. Sometimes the price of such clothes significantly exceeds the cost of real outfits from well-known brands, thanks to their uniqueness and the proof of ownership provided by NFT technology.

The fashion industry's intention to move toward digital transformation is confirmed by the fact that Decentraland hosted Metaverse Fashion Week in March 2022. The NFT fashion show featured more than 70 brands, including Tommy Hilfiger, Dolce & Gabbana and Karl Lagerfeld, who participated in branded catwalks showcasing the results of their collaboration with digital designers.

 

Gaming

More and more games allow players to create and customize their game characters and buy unique weapons, artifacts, clothing, accessories, and other game-related items. Recently, there has been a growing trend among players to own their characters as NFTs. You can create your in-game avatar and even export it to other games in the ecosystem or sell it on an NFT marketplace.

Purchasing in traditional games is a one-time, non-transferable investment limited to a single gaming environment. Using NFT, on the other hand, transfers ownership of in-game assets to the player rather than to the game's developers. Blockchain technology gives players the ability to save in-game purchases, sell them to other players, or transfer them to other games. This interoperability is achieved through blockchain technology - for example, two games created on the Ethereum network can support the same in-game assets, such as vehicles, armor, or even entire characters.
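The ownership model described above can be sketched as a tiny registry: the token records who owns an asset, and only the current owner can transfer it, regardless of which game the asset is used in. This is an illustrative Python toy, not contract code; all token and player names are hypothetical, and a real implementation would be an on-chain contract (for example, an ERC-721 token on Ethereum).

```python
# Toy ownership registry illustrating how NFT-style in-game assets belong
# to the player rather than the game. Names are hypothetical examples.

class AssetRegistry:
    def __init__(self):
        self.owners = {}  # token_id -> owner address

    def mint(self, token_id: str, owner: str) -> None:
        """Create a unique asset; duplicates are rejected."""
        if token_id in self.owners:
            raise ValueError("token already exists")
        self.owners[token_id] = owner

    def transfer(self, token_id: str, sender: str, recipient: str) -> None:
        """Only the current owner may sell or move the asset."""
        if self.owners.get(token_id) != sender:
            raise PermissionError("only the owner can transfer this asset")
        self.owners[token_id] = recipient

registry = AssetRegistry()
registry.mint("sword#42", "alice")
registry.transfer("sword#42", "alice", "bob")  # sale to another player
print(registry.owners["sword#42"])             # bob
```

Because the registry lives outside any single game, "bob" keeps the asset even if the game that minted it shuts down, which is the interoperability point made above.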

So, NFTs can easily exist independently from a particular gaming platform, and even if the game ceases to exist, the player still owns the character and acquired NFT assets and can use them in a different environment. In addition, blockchain-enabled game assets cannot be duplicated or forged because an immutable record is created when each NFT is released. 

 

Real Estate

With the development of the metaverse, a new type of property is becoming increasingly popular alongside traditional "physical" real estate: virtual real estate offers the opportunity to buy land, islands, houses, apartments, and even famous landmarks in the metaverse. Often, ownership of digital real estate is linked to ownership in the real world.

After a customer purchases virtual property, the transaction is recorded in the blockchain and the NFT is transferred to the buyer's digital wallet. After the wallet is connected to the metaverse platform, it authenticates the land ownership. Owning virtual property allows users to participate in the governance of the metaverse platform - for example, to vote on initiatives for its further development.

Physical real estate NFTs are created by registering a physical asset (your home, land, or commercial property) on a blockchain. The metadata of each NFT is anchored in a blockchain, which serves as verifiable proof of authenticity and ownership. It confirms ownership, identifies encumbrances on the property, and enables more efficient processing of transactions. In addition, an NFT can only have one official owner at a time and cannot be forged or altered. Smart contracts in each NFT automate all actions, creating faster and more efficient transactions without intermediaries and tons of paperwork.

 

Advertising and marketing

NFTs provide companies with a new way to promote their business, advertise new products, and reach new audiences. They connect brands with customers through valuable, unique assets that build long-term loyalty. Non-fungible tokens help develop more personal and effective brand-customer interactions by enabling consumers to express their opinions and vote on future brand events, new product lines, and potential services.

Offering digital collectibles can greatly increase brand awareness, allowing companies to build a community of fans and collectors around their products and services.

Numerous brands are already using this technology to attract public and media attention to their products. Collaboration of famous brands with NFT artists or marketplaces is quite a popular way of promotion these days. 

And of course, the metaverse should be mentioned again. Companies can test the demand for their future products by creating a virtual prototype and getting feedback before launching the product in reality. Virtual spaces offer the possibility to place ads on billboards, banners and even on the metaverse-radio.

Marketing industry experts believe that the most powerful strategy for brand promotion is a combination of social media and NFT technology.

 

Healthcare

Currently, most healthcare providers use outdated methods of storing medical records, resulting in high costs, slow record retrieval, and weak security that often leads to the loss or sale of patient medical data. However, some companies have already leveraged the benefits of blockchain by keeping patients' medical records as NFTs.

By storing medical data on the blockchain, patients will be able to own their records without third-party involvement, making all processes related to medical care more efficient and secure. Patients will be able to track their medical data and hold those who use it without their consent accountable. They will also be able to get paid every time they give the company permission to use their data for research, building marketing strategies or new products. Tools such as the decentralized software engine used by clinical trial platforms and healthcare organizations already exist to track and process patient consent for clinical trials via NFTs.

Blood donation organizations use non-fungible tokens to track a blood donation through their system. A digital "blood bank" can be created by registering blood through its NFT, with the blockchain system tracking demand for specific types of blood to deliver them quickly to where they are needed.

 

Some final words

The use of NFT technology is certainly not limited to the areas mentioned above, and there is no doubt that with the development of Web-3, despite all the concerns and hype, NFT will continue to be integrated into various aspects of our lives, perhaps even in the most unexpected ways.

We are ready to help companies that recognize the need to build the foundation for their presence in the new digital reality, where there is no more room for paperwork and old-fashioned document flow, boring and monotonous presentations, and non-transparent transactions. The Inmost team has a successful track record in NFT development, and blockchain is one of our core competencies.

 

Amazon Transcribe Medical

Digitization of the healthcare sector

In recent years, the healthcare sector has begun to actively embrace modern digital solutions - from telemedicine applications connecting residents of the most remote, "hard-to-reach" regions to world-class medical services, to sensors and devices that help remotely monitor and record patient physical data such as heartbeat, blood pressure, movement, and even behavioral patterns. The unique challenges of Covid-19 played a decisive role in accelerating the digitization of healthcare, once it became clear that many processes in the sector required fundamental transformation.

Currently, medicine has a variety of digital tools to improve communication, administrative and operational processes, and data storage and transmission.

One such tool that facilitates the work of medical professionals is the Amazon Transcribe Medical service.

 

What is Amazon Transcribe Medical?

In the past, writing paper reports took doctors a lot of time. Since the start of the digital transition, healthcare providers have been required to enter medical records into Electronic Health Record (EHR) systems on a daily basis. According to a 2017 study conducted by the University of Wisconsin and the American Medical Association, primary care physicians in the United States spent up to 6 hours a day entering this data.

In 2019, Amazon launched a service built on top of Amazon Transcribe. It was designed specifically for healthcare professionals to transcribe medical speech, such as physician-dictated notes, drug safety monitoring calls, telemedicine appointments and consultations, or conversations between doctors and patients.

The Amazon Transcribe Medical service uses machine learning and natural language processing (NLP) to accurately convert speech or conversation audio to text. It is trained to understand complex medical language and the special terms and measurements doctors use. Developers can build medical voice applications by integrating with the service’s easy-to-use APIs. Pharmaceutical companies and healthcare providers can use Amazon Transcribe Medical to create services that enable fast and accurate documentation.

The service can transcribe speech from either an audio file or a real-time stream; input audio can be in FLAC, MP3, MP4, Ogg, WebM, AMR, or WAV format. Streaming transcription is available in US English and can produce transcriptions of accented speech spoken by non-native speakers.

The service provides transcription expertise for primary care and for specialty areas such as cardiology, neurology, obstetrics-gynecology, pediatrics, oncology, radiology, and urology. Transcription accuracy can be improved by using custom medical vocabularies.
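As a rough sketch of how a developer would drive the batch API, the snippet below builds the request for a `StartMedicalTranscriptionJob` call using boto3, the AWS SDK for Python. The job name, S3 URIs, and bucket names are placeholders; running the commented-out call for real requires AWS credentials and the audio file already uploaded to S3.

```python
# Sketch of starting a batch Amazon Transcribe Medical job with boto3.
# Bucket names, URIs, and the job name below are hypothetical placeholders.

def build_transcription_request(job_name: str, audio_uri: str,
                                output_bucket: str) -> dict:
    """Assemble parameters for StartMedicalTranscriptionJob."""
    return {
        "MedicalTranscriptionJobName": job_name,
        "LanguageCode": "en-US",       # the service supports US English
        "MediaFormat": "wav",          # also FLAC, MP3, MP4, Ogg, WebM, AMR
        "Media": {"MediaFileUri": audio_uri},
        "OutputBucketName": output_bucket,
        "Specialty": "PRIMARYCARE",    # primary care transcription
        "Type": "CONVERSATION",        # or "DICTATION" for dictated notes
    }

params = build_transcription_request(
    "consult-001",
    "s3://example-input-bucket/consult-001.wav",
    "example-output-bucket",
)

# With AWS credentials configured, the job would be started like this:
#   import boto3
#   boto3.client("transcribe").start_medical_transcription_job(**params)

print(params["Specialty"], params["Type"])
```

Choosing `Type="CONVERSATION"` tells the service to expect a doctor-patient dialogue, while `"DICTATION"` suits single-speaker physician notes.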

 

Transcribe Medical use cases

Medical dictation: medical specialists can record their notes by speaking into the microphone of a mobile device during or after a patient interaction, reducing the administrative workload and letting them focus on providing quality patient care.

Drug safety monitoring: transcribing phone calls about drug prescriptions and side effects enables safer service provision by pharmaceutical companies and clinics.

Transcribing conversations: recording conversations between a doctor and a patient in real time, without disrupting the interaction, allows healthcare providers to accurately capture details such as reported symptoms, medicine dosage and frequency, and side effects. This information can be processed with subsequent text analytics and then entered into Electronic Health Record (EHR) systems.

For online video or phone consultations, the Channel Identification feature can be used. This powerful tool transcribes the patient and clinician audio channels independently and provides real-time conversational subtitles.

 

Benefits of Amazon Transcribe Medical

Amazon Transcribe Medical benefits a wide range of healthcare specialists: nurses, physicians, researchers, insurers, and pharmaceutical companies - as well as their patients. The following features make it highly attractive to clinicians and healthcare professionals:

HIPAA (Health Insurance Portability and Accountability Act) eligible: by supporting automatic identification of protected health information (PHI) in medical transcriptions, Amazon Transcribe Medical reduces the cost, time, and effort spent identifying PHI content through manual processes. PHI entities are clearly labeled in each output transcript, making it convenient to build additional downstream processing for a variety of purposes, such as redaction prior to text analytics.

Highly accurate transcription: the narrow specialization of the service, aimed exclusively at the needs of the healthcare sector, ensures that even the most complex medical terms, such as the technical names of diseases and medicines, are recorded correctly.

Improving the patient and practitioner experience: the doctor does not have to spend time taking notes and writing reports and can focus on the patient, while every detail of the consultation or conversation is accurately transcribed without disrupting the interaction.

Easy to use: no prior knowledge or experience with machine learning is required. Developers can focus on building their medical speech applications by simply integrating with the service's APIs. Transcribe Medical handles the development of state-of-the-art speech recognition models.

Thus, Amazon keeps investing in the medical sector, empowering healthcare and life sciences, and expanding its range of digital services to deliver patient-centered care, accelerate the pace of innovation, and unlock the potential of data, while maintaining the security and privacy of health information.

 

Our experience

With extensive experience in building healthcare applications based on Amazon services and long-term partnerships with global leaders in telemedicine technologies and services, we at Inmost took the opportunity to ease the burden of reporting and documentation for our clients by integrating Transcribe Medical into an application for remote medical consultations. This has significantly optimized the medical staff's workload, streamlined processes, and increased positive feedback from patients.

Based on these experiences, we consider Amazon Transcribe Medical Service to be a really important and useful tool for transforming medical services. 

And, of course, we are ready to support healthcare organizations on their digital transformation path by providing consulting services, renovating and improving existing platforms or developing efficient and reliable solutions from scratch.

 

Zigbee

According to Statista, there are about 43 billion IoT devices in the world today.

By 2025, 75 billion IoT devices are predicted to be online, and Statista expects that a great many of those devices will be in areas that lack a standard connection.

The future of IoT will be built on open networks and collaboration. Until that future arrives, let's discuss the connectivity options available today.

I think there is no need to mention BLE, Wi-Fi, or 5G. There is no competition between these networks – rather, they are complementary.

Let’s speak about Zigbee. How does this technology differ from those mentioned above?

Zigbee and what it's all about

Zigbee is a standards-based wireless technology developed as an open, global connectivity standard to address the unique needs of low-cost, low-power wireless IoT data networks. Zigbee operates on the IEEE 802.15.4 physical-layer radio specification, in unlicensed radio bands including 2.4 GHz, 900 MHz and 868 MHz.

Specifications of Zigbee

The Zigbee specifications, which are maintained and updated by the Zigbee Alliance, build on the IEEE 802.15.4 standard by adding network and security layers and an application framework.
In theory, it enables the mixing of implementations from different manufacturers, but in practice, Zigbee products have been extended and customized by vendors and, thus, plagued by interoperability issues. In contrast to Wi-Fi networks used to connect endpoints to high-speed networks, Zigbee supports much lower data rates and uses a mesh networking protocol to avoid hub devices and create a self-healing architecture.

There are three Zigbee specifications: Zigbee PRO, Zigbee RF4CE and Zigbee IP.

Zigbee PRO aims to provide the foundation for IoT with features to support low-cost, highly reliable networks for device-to-device communication. Zigbee PRO also offers Green Power, a new feature that supports energy harvesting or self-powered devices that don't require batteries or AC power supply.

Zigbee RF4CE is designed for simple, two-way device-to-device control applications that don't need the full-featured mesh networking functionalities offered by the Zigbee specification.

Zigbee IP optimizes the standard for IPv6-based full wireless mesh networks, offering internet connections to control low-power, low-cost devices.

Mesh network

Mesh networks are decentralized by nature: flexible, reliable and expandable. A node can act as an End Node, a Router or a Coordinator, and nodes can communicate peer-to-peer for high-speed direct communication, or node-to-Gateway.

Zigbee and Z-wave are two well-known mesh networking technologies. In a mesh network, nodes are interconnected with other nodes so that multiple pathways connect each node. Connections between nodes are dynamically updated and optimized through sophisticated, built-in mesh routing tables.
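The self-healing behavior described above can be illustrated with a toy routing sketch in plain Python (node names are arbitrary): when one relay loses its link, a breadth-first search finds an alternative path.

```python
# Toy illustration of mesh self-healing: if a relay drops its uplink,
# routing recomputes an alternative path through another node.
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search over an undirected adjacency mapping."""
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nxt in links.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    return None

links = {
    "sensor": ["router1", "router2"],
    "router1": ["sensor", "coordinator"],
    "router2": ["sensor", "coordinator"],
    "coordinator": ["router1", "router2"],
}
print(shortest_path(links, "sensor", "coordinator"))  # ['sensor', 'router1', 'coordinator']
links["router1"].remove("coordinator")                # router1 loses its uplink
links["coordinator"].remove("router1")
print(shortest_path(links, "sensor", "coordinator"))  # ['sensor', 'router2', 'coordinator']
```

Real Zigbee routing relies on built-in mesh routing tables rather than a full graph search, but the effect is the same: traffic flows around a failed node.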

Security

Zigbee is inherently secure. It provides options for authentication and data encryption, using 128-bit AES encryption keys, similar to its primary competitor Z-Wave (all the pros and cons of Z-Wave will be considered in the next article).
This, plus its short-range signals, makes Zigbee secure. However, most home automation protocols offer similar levels of security when configured properly.

Power consumption

Power consumption for Zigbee is comparable with BLE. However, the proven, routed mesh mechanism adopted in Zigbee makes it slightly more power efficient.

What Is Zigbee Compatible With?

Zigbee-compatible devices can be controlled by hubs such as Samsung SmartThings and the Amazon Echo Dot. Well-known Zigbee products include Philips Hue and IKEA Tradfri smart lighting, Hive Active Heating and its accessories, and a variety of Honeywell thermostats.

Conclusion. Why choose Zigbee?

Comparing Zigbee with the existing connectivity options, it’s obvious that Zigbee offers multiple advantages over Bluetooth.

For example, BLE works best for smaller packets (less than 12 bytes). For such sizes it is comparable to Zigbee, but as packet size increases, the higher BLE layers fragment the data, causing latency to grow.

However, BLE has a cost advantage over Zigbee. BLE mesh has a bigger ecosystem and uses the same BLE chipset used in other applications, so high-volume production of BLE chipsets pulls the IC cost down compared to Zigbee.

The need for a gateway device further increases the overall cost of a Zigbee system. BLE-based systems can provide limited functionality (everything except full-fledged internet connectivity) without a gateway. In addition, Zigbee licensing is more expensive and complex than BLE's.

Meanwhile, Zigbee is more cost-effective and uses significantly less energy than Wi-Fi, resulting in better battery life. As for another “rival”, LoRaWAN: it is significantly cheaper than Zigbee, and the two are close in some characteristics.

If you are looking at a cheap, long-battery-life sensing project with no real-time, control or automation requirements, where slower poll rates are acceptable, then LoRaWAN is a good contender and a good choice for many entry-level sensing applications.
But if you need to control automation or require faster poll rates, it’s better to step up to Zigbee. As mentioned, Z-Wave will be considered next time.

Where to use?

Zigbee wireless communication is used in homes, businesses, and other settings.
Zigbee can transmit data over distances that are sufficient for most applications. It is a clear winner for industrial applications that require reliability, real-time monitoring, control or automation, and the protocol is highly underrated for low-power sensing.

 

 

NFT – the most contradictory component of the Web 3

There are probably no people left who have not heard of NFTs yet.

As a vital component of Web3 (the next iteration of the Internet), along with the Metaverse and DeFi, NFTs evoke perhaps the most contradictory feelings in society, from enthusiasm sometimes bordering on insanity to outright hostility and harsh criticism.

NFT - a non-fungible token - is a unique unit of data that is verified and stored in the blockchain and can be linked to digital or physical assets to provide immutable proof of ownership. Blockchain technology allows NFTs to be tracked in an immutable digital ledger that provides a history of assets and can be verified at any time. So, NFTs cannot be replicated, destroyed, or counterfeited.

NFTs are primarily created on Ethereum, but other blockchains support them as well.

Before NFTs can be sold, they must first be minted. Minting an NFT means converting a digital file into a digital asset that can be published and stored on the blockchain, making it available to potential buyers. The minting process is not free: you need a crypto wallet and a certain amount of cryptocurrency to cover the Ethereum "gas fees." The most popular NFT marketplaces on the Ethereum blockchain are OpenSea, Rarible, and Mintable.

Today, almost anything can become an NFT: paintings, photos, videos, music, gifs, memes - any kind of unique art that can be represented digitally. Or it can even be real estate, collectibles, event tickets, website domains or tweets.

Famous auction houses Christie’s and Sotheby’s have already made sales of NFT artworks for several hundred million dollars.

The first experiments with NFT started back in 2013, but the wave of hype rose only in 2021, and sometimes it looked like real insanity. So, the creator of the Nyan Cat meme received $580,000 in cryptocurrency for a gif with the famous cat meme, and the digital artist Beeple sold the token of jpeg collage - Everydays: The First 5000 Days, for $69.3 million.

NFT technology is actively used by both well-known and not yet recognized artists. The main factor in the growing popularity of NFT is the opportunity for a beginner to present his or her work to a wide audience. A few years ago, a new artist had to work hard for several years before reaching the first serious exhibition, and still success was not guaranteed. Today it's enough to convert your painting into a digital format, create a corresponding NFT token (it's not that complicated) and sell it for real money.

Actually, NFT technology can be used for transactions with any digital assets, however, the recent trends show a growing interest in selling real things as NFTs. These can be, for example, sculptures, antiques, a coin collection, etc. But if converting paintings into a digital format is a common thing today, how can a real physical object become an NFT?

The first way is to create a 3D digital copy of the object. Technologies that allow the average person to create such copies are becoming more and more available. Naturally, this attracts a lot of interest from businesses and corporations that have already been using and investing in 3D technologies to promote their brands and products not only in the real world, but also in the virtual world of the metaverse. In addition, NFTs of 3D objects are expected to stand in for our favorite real-world things, objects, and assets in the metaverse, making it even more similar to our everyday environment.

But what if we take objects that are difficult to digitize? Last month, for example, a three-bedroom house in South Carolina was sold as an NFT for $175,000. The buyer indicated that he was able to make the transaction for that property with just one click. How does it work?

In simple terms, the selling company creates an NFT that represents ownership of the house. Those who buy this NFT become the owner of the property. Despite the fact that the purchase is made digitally, the ownership is considered absolutely real - whoever owns the NFT owns the house in the real world.

Although such transactions are still viewed with suspicion by the majority, there are serious reasons to believe that NFT technology could open the door to a decentralized economy without intermediaries such as banks or a government. In the future, it may completely change the rules of the markets.

Today, besides art and real estate, NFTs show the most potential in the gaming, education, healthcare, and supply chain and logistics industries. NFT tokens can be used to confirm any important document: a diploma, health records, a marriage registration certificate, etc.

A serious barrier to NFT adoption in these areas is the lack of government regulation. It is very likely that, in the case of fraud or a hacker attack, the affected party will not be able to recover its losses.

Moreover, NFT technology faces many other challenges today. For example, one of the main arguments against NFT is its huge energy consumption and extremely negative impact on the environment. However, after Ethereum has switched from a power-intensive Proof-of-Work protocol to a mining-free Proof-of-Stake, it is possible that NFT will become more eco-friendly and increase its audience.

Since NFTs have both devoted fans and haters, they remain one of the most hyped components of Web3, mentioned every now and then in the news and social media, either alongside figures containing an impressive number of zeros, or alongside facts that raise a no less impressive number of questions and misunderstandings.

 

Transcribe

Amazon launched the Transcribe service in 2017, enabling developers to add a speech-to-text feature to their applications.

Analyzing and extracting data directly from audio files is almost impossible for computers. To use such data in an application, speech must first be converted to text. Speech recognition services certainly existed before, but they were generally expensive and poorly adapted to various scenarios, such as the low-quality phone audio found in some contact centers.

Powered by deep learning technologies, Amazon Transcribe is a fully managed and continuously trained automatic speech recognition service that automatically generates time-stamped text transcripts from audio files. The service parses audio and video files stored in many common formats (WAV, MP3, MP4, AMR, Flac, etc.) and returns a detailed and accurate transcription with timestamps for each word, as well as appropriate capitalized words and punctuation. For most languages, numbers are transcribed into a word form, however for English and German languages Transcribe treats numbers differently depending on the context in which they're used.

Now Transcribe supports 37 languages.

Transcription methods can be divided into two main categories:

  • Batch transcription: transcribing media files that have been uploaded into an Amazon S3 bucket;
  • Streaming transcriptions: transcribing media streams in real time.
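Since batch jobs are asynchronous, the typical flow is to start a job and then poll its status. The sketch below shows that pattern with a generic polling helper; the boto3 calls appear only in comments, the helper is exercised with a stub, and the job and bucket names are hypothetical.

```python
# Sketch of the batch flow: start a transcription job, then poll until done.
import time

def poll_until_done(get_status, interval=0.0, max_tries=30):
    """Call get_status() repeatedly until it returns COMPLETED or FAILED."""
    for _ in range(max_tries):
        status = get_status()
        if status in ("COMPLETED", "FAILED"):
            return status
        time.sleep(interval)
    raise TimeoutError("transcription job did not finish in time")

# Real usage would look roughly like this (names are hypothetical):
#   transcribe = boto3.client("transcribe")
#   transcribe.start_transcription_job(
#       TranscriptionJobName="demo-job",
#       Media={"MediaFileUri": "s3://example-bucket/interview.mp3"},
#       MediaFormat="mp3",
#       LanguageCode="en-US",
#   )
#   status = poll_until_done(lambda: transcribe.get_transcription_job(
#       TranscriptionJobName="demo-job")["TranscriptionJob"]["TranscriptionJobStatus"])

# Stub that "finishes" after two in-progress polls:
states = iter(["IN_PROGRESS", "IN_PROGRESS", "COMPLETED"])
print(poll_until_done(lambda: next(states)))  # COMPLETED
```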

 

Here are some of the features it provides:

  • Single- and multi-language identification: identifies the dominant language spoken in your media file and creates a transcript. If speakers change language during a conversation, or if each participant speaks a different language, your transcription output correctly detects and transcribes each language;
  • Transcribing multi-channel audio: combines transcriptions from multi-channel audio into a single output file. It is possible to enable channel identification for both batch processing and real-time streaming;
  • Speaker diarization: the partition of the text from different speakers, detecting each speaker in the provided audio file;
  • Custom language models: designed to improve transcription accuracy for domain-specific speech. This includes any content that goes beyond the everyday type of conversations. For example, an audio recording of a report from a scientific conference will obviously contain special scientific terms that standard transcription is unlikely to be able to recognize. In this case, you can train a custom language model to recognize the specialized terms used in your discipline;
  • Custom vocabularies: are used to improve transcription accuracy for a list of specific words. These are generally domain-specific terms, such as brand names and acronyms, proper nouns, and words that Amazon Transcribe isn't rendering correctly;
  • Tagging: adding custom metadata to a resource in order to make it easier to identify, organize, and find in a search;
  • Subtitles: can be used to create closed captions for your video and filter inappropriate content from your subtitles.
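As a small illustration of custom vocabularies, the sketch below only assembles the request parameters; the vocabulary name and phrases are made up, and the boto3 call is shown in a comment.

```python
# Sketch: parameters for creating a custom vocabulary of domain terms.
# Name and phrase list are hypothetical; multi-word phrases use hyphens.
def build_vocabulary_params(name, phrases):
    return {
        "VocabularyName": name,
        "LanguageCode": "en-US",
        "Phrases": phrases,
    }

params = build_vocabulary_params(
    "acme-brand-terms",
    ["AcmeCloud", "Acme-Support-Plus", "SKU"],
)
# With boto3: boto3.client("transcribe").create_vocabulary(**params)
print(len(params["Phrases"]))  # 3
```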

 

Transcribe offers indispensable features for call centers and support services. It helps to capture useful insights by transcribing customer calls in real time. Analyzing and categorizing calls by keywords, phrases and sentiment can help track negative situations, identify trends in customer issues or allocate calls to specific departments.

It is possible to measure the volume of speech. This metric helps to understand if the customer or employee is talking loudly, which is often an indication of being angry or upset. The quality of communication with the client can also be determined by setting the following metrics: interruptions, non-talk time, talk speed, talk time.
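A metric like non-talk time can be derived from the word- or segment-level timestamps in a transcript. The following sketch uses made-up segment data, not actual Transcribe output.

```python
# Sketch: summing the silent gaps between speech segments of a call.
def non_talk_time(segments, call_end):
    """Segments are (start, end) pairs in seconds; returns total silence."""
    silence, cursor = 0.0, 0.0
    for start, end in sorted(segments):
        if start > cursor:
            silence += start - cursor   # gap before this segment
        cursor = max(cursor, end)
    return silence + max(0.0, call_end - cursor)  # trailing silence

segments = [(0.5, 4.0), (6.0, 9.5), (9.5, 12.0)]
print(non_talk_time(segments, call_end=15.0))  # 0.5 + 2.0 + 3.0 = 5.5
```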

Besides call-centers, Transcribe Service can be useful in almost any field: education, law, e-commerce, and many others. For example, Amazon Comprehend Medical is a machine-learning-powered HIPAA-eligible service pre-trained to identify and extract health data from medical texts, such as prescriptions, procedures, or diagnoses. 

It is difficult to imagine modern technology without a service that can transform speech into text. And of course, Transcribe has analogues from other digital giants. However, it is worth noting that many developers who have used the Amazon service report much higher quality and accuracy compared to similar solutions currently on the market.

 

 

NVIDIA chipsets for IoT

We have already discussed the chipsets we worked with in one of our projects with static IoT devices. Now it is time to learn more about chipsets for moving IoT devices, and about the NVIDIA chipsets to which our customer gave his heart.

NVIDIA boards became famous and earned a reputation among gamers and graphics designers (the GeForce series) some time ago, and now NVIDIA has the Jetson series.

The first board was the TX1 released in November, 2015.  Now NVIDIA has released the more powerful and power-efficient Jetson TX2 board.

The Jetson boards are siblings to NVIDIA’s Drive PX boards for autonomous driving and the TX2 shares the same Tegra “Parker” silicon as the Drive PX2.

There are many synergies between the two families as both can be used to add local machine learning to transportation. The Drive PX boards are designed for automotive with extended temperature ranges and high reliability requirements. The Jetson boards are optimized for compact enclosures and battery power for smaller, portable equipment.

With devices such as robots, drones, 360 cameras, and medical equipment, Jetson can be used for “edge” machine learning. The ability to process data locally and with limited power is useful when connectivity bandwidth is limited or spotty (as in remote locations), when latency is critical (real-time control), or where privacy and security are a concern.

Another innovative solution from NVIDIA - Jetson Nano.

The Jetson Nano development board is also a powerful, small AI computer: it is built around a system-on-chip (SoC) and needs only a MicroSD card with a system image to start. It runs neural network frameworks such as TensorFlow, PyTorch, Caffe/Caffe2, Keras and MXNet, enabling functionality such as image classification, object detection, speech segmentation, and intelligent analysis. It is usually used to build autonomous robots and complex artificial intelligence systems.

The Customer had chosen this chip for his moving device, because it was extremely important to detect obstacles and define direction. All the tasks were covered by the chipset functionality rather successfully.

You may ask: why not choose a Raspberry Pi, which is, by the way, more reasonably priced?

The Raspberry Pi was considered as an alternative. In fact, the two boards are very similar in their primary functions, and both offer features such as ARM processors, 4GB of RAM, and a series of peripherals.

As for video-out: the Nano has both HDMI 2.0 and DisplayPort available, which can be used at the same time. The Pi is limited to either its HDMI port or its proprietary display interface, which as far as we at Inmost know cannot be used simultaneously.

They both have multiple ways of interfacing, including I2C, I2S, serial, and GPIO, but we also appreciate that the Nano has USB3.0 and Gigabit Ethernet.

However, the biggest difference is that the Raspberry Pi has a low-power VideoCore multimedia processor, while the Jetson Nano contains a higher-performance, more powerful GPU (graphics processor), which supports functions the Raspberry Pi can't handle. This makes deeper development possible and gives the Jetson Nano more potential.

For our customer's project, fast processing of video from the camera is the number one task, so it was clearly decided to use Jetson Nano to solve this problem.

The NVIDIA Jetson system is high performance and power-efficient, making it one of the best and most popular platforms for building machines based on AI on the edge (Edge Machine Learning).

Private Blockchain

“Our virtues are generally but disguised vices” – La Rochefoucauld

 

Why at all do we need a private blockchain?

One of the blockchain types supported by Amazon is Hyperledger Fabric. There is a dedicated Amazon Managed Blockchain service for Hyperledger Fabric on the AWS platform that simplifies setting up blockchain networks and reduces the time to deploy solutions based on them. Hyperledger Fabric is a private blockchain, so let's first look closer at what tasks a private blockchain can perform in general.

As you know, a public blockchain has three main properties:

- Decentralization - there is no single node or a dedicated group of nodes that store any information separately - information is duplicated in an amount equal to the number of users in the system;

- Transparency - every user has access to the entire database and can track all changes;

- Reliability - all the records form a chain, and each new record is linked to previous ones by a special mathematical function that depends on the data in the previous elements of the chain. This ensures that the data cannot be changed retroactively.

 

All these components together allow you to build an information storage system where each individual element (user) is untrusted, but in combination, they form a trustworthy repository.
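The chaining property behind this reliability can be demonstrated in a few lines of Python: each record's hash covers the previous hash, so editing an early record breaks verification of everything after it. (This is a conceptual sketch, not production blockchain code.)

```python
# Minimal illustration of hash chaining: retroactive edits are detectable.
import hashlib

def chain(records):
    """Return (record, hash) pairs where each hash covers the previous one."""
    blocks, prev = [], ""
    for rec in records:
        digest = hashlib.sha256((prev + rec).encode()).hexdigest()
        blocks.append((rec, digest))
        prev = digest
    return blocks

def verify(blocks):
    """Recompute every link; any mismatch means the chain was tampered with."""
    prev = ""
    for rec, digest in blocks:
        if hashlib.sha256((prev + rec).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

blocks = chain(["pay Alice 10", "pay Bob 5", "pay Carol 7"])
print(verify(blocks))                       # True
blocks[0] = ("pay Alice 99", blocks[0][1])  # tamper with an early record
print(verify(blocks))                       # False
```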

Why is this concept not suitable for an enterprise environment?

First of all, the lack of user identification. This is especially critical when performing financial transactions in an enterprise environment. In 2016, the concept of KYC (Know Your Customer) appeared in official documents of FinCEN, the US Treasury Department's bureau for combating financial crime, which requires financial institutions to identify their customers before allowing them to conduct financial transactions.

In addition, in 1989, the FATF introduced the principles of AML (Anti-Money Laundering) - measures to combat money laundering. And these principles require user identification. Thus, there are powerful arguments why an enterprise blockchain should be private but not public.

Is this the only difference? If we create a blockchain network accessible only to authenticated users, can we reuse the other elements of the public blockchain architecture for an enterprise system? No, we cannot.

On a public blockchain, we use various consensus mechanisms to validate a transaction that adds a new block of data to the chain. All of these mechanisms rely on all network users participating in the validation process and receiving reward for this participation in one way or another.

And to pay this reward, each public blockchain invents its own cryptocurrency. Cryptocurrencies and public blockchains cannot exist without each other. Currently, there are over 10,000 different cryptocurrencies in the world. This amount significantly exceeds the number of fiat currencies, the value of which is guaranteed by the states that issue them. It is quite difficult to release a new cryptocurrency that has at least some value to the public.

This idea is not suitable for a corporate network. For two reasons:

- First, keeping a complete copy of the entire database on the computers of every employee with access to the corporate network in order to participate in the consensus will not cause enthusiasm among the security services in any corporation, no matter how powerful encryption algorithms are protecting information;

- Second, the idea of introducing an internal cryptocurrency in the corporate network also seems strange.

 

This leads to the conclusion that a private blockchain needs a consensus mechanism based on a centralized algorithm.

So, what is left of the original idea? First, we consistently abandoned decentralization (although the decentralization level of public blockchains based on the Proof-of-Stake consensus mechanism is very doubtful anyway), and then transparency was dropped. In the end, only reliability remains: “all the records form a chain, and each new record is linked to previous ones by a special mathematical function that depends on the data in the previous elements of the chain. This ensures that the data cannot be changed retroactively.”

 

In fact, this is really not so little. We have obtained reliable data storage located in the corporate network, in which information cannot be faked, no matter what access rights the person who wants to do so may hold. And because a central node, or a set of nodes, is responsible for confirming transactions, recording information is significantly faster than in public blockchains. There are many applications for such reliable storage with fast access in corporate networks. We will look closer at some of them soon, and will talk about how Hyperledger Fabric solves this problem.

 

DeFi

We have mentioned the decentralized finance system - DeFi, talking about the trading metaverse - Metafi (http://www.inmost.pro/blog/metafi-first-social-trading-metaverse/). 

As one of the hottest topics in the crypto world over the last few years, DeFi is definitely worth a closer look.

DeFi is a global blockchain-based financial system built to meet the needs of the new Internet iteration - Web-3. It is an alternative to tightly controlled traditional systems with outdated infrastructure and processes. It allows you to control and have direct access to your money. DeFi eliminates the fees that banks and other financial institutions charge for using their services. People store money in a secure digital wallet and funds transfer takes only a few minutes. It also provides access to global markets and creates alternatives to local currency or banking solutions. Any traditional services provided by financial institutions can be expected to be offered through DeFi. 

While not everyone has the ability to open a bank account and use traditional financial services, anyone with access to the Internet can use the services offered by DeFi products. Currently, tens of billions of dollars in cryptocurrencies have flowed through DeFi programs, and the number of transactions is increasing every day.

DeFi markets are always open and there is no centralized authority limiting their working time, blocking payments or denying access. This decentralization aspect is considered to be one of the main advantages of DeFi.

To provide services without third parties, DeFi uses cryptocurrencies and smart contracts, transferring trust from intermediaries to machine algorithms.

A smart contract is a self-executing contract where the terms and conditions are defined and applied through automation and approved autonomously and efficiently on the blockchain. No one will be able to change a smart contract once it is launched: it will always work as programmed. Smart contracts are public, so anyone can view and monitor them. This means that the community will be able to quickly detect a compromised contract and react accordingly.
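As a conceptual analogy only (plain Python, not blockchain code), a smart contract behaves like a deterministic state machine whose rules are fixed once deployed:

```python
# Toy analogy of a smart contract: a deterministic escrow whose rule
# (release funds only on exact payment) cannot be renegotiated at runtime.
class Escrow:
    def __init__(self, seller, price):
        self.seller, self.price, self.state = seller, price, "OPEN"

    def deposit(self, amount):
        # The rule is fixed: exactly the agreed price releases the funds.
        if self.state == "OPEN" and amount == self.price:
            self.state = "RELEASED"
            return f"funds released to {self.seller}"
        return "deposit rejected"

deal = Escrow(seller="alice", price=100)
print(deal.deposit(50))   # deposit rejected
print(deal.deposit(100))  # funds released to alice
```

A real smart contract adds what this toy lacks: the code and its state live on a public blockchain, so anyone can audit the rules and no party can alter them.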

The security, privacy and transparency of DeFi services are also based on the fundamental advantages of blockchain, as the records in chain blocks cannot be changed or controlled by any authority.

Even though most DeFi services are now built on Ethereum, Bitcoin was the real DeFi pioneer, giving the ability to own, control, and send assets anywhere in the world. Bitcoin is open to everyone and no one can change the rules. Its concepts really differ from the traditional financial world, where governments can print money and devalue your savings, and companies can shut down markets.

Now, Ethereum is the ideal foundation for DeFi. Most of DeFi products are actually powered by Ethereum. Therefore, many of them can be easily configured for interaction. You can borrow tokens on one platform, and exchange them on another market and in a completely different program. Tokens and cryptocurrency are written into the Ethereum blockchain, and a shared ledger that tracks transactions and ownership is one of Ethereum's unique features.

Like every other system, DeFi is composed of different parts. Its infrastructure consists of layers that are responsible for various processes and guarantee the smooth functioning of transactions and contracts:

  1. Settlement Layer: also called Layer 0. Based on the Ethereum blockchain, it serves as the foundation for all DeFi transactions, writing code or building applications. This is the vital component: the DeFi system cannot exist without the blockchain.

  2. Protocol Layer: defines rules and standards for all DeFi transactions, it is a description of the specific conditions necessary for the code to run accurately and fulfill its tasks. All the protocols are interoperable and can be used to create any application in the DeFi ecosystem.

  3. Application Layer: consists of decentralized applications or dApps - products created on the basis of two previous layers that serve as a kind of front-end in the DeFi ecosystem, enabling consumers to use DeFi services. With dApps you can buy, sell, trade, lend, and borrow cryptocurrencies on a decentralized network.

  4. Aggregation Layer: at this layer third-party vendors create end-to-end solutions by bringing together existing decentralized applications and offering users and investors a wide range of financial services in one place.

 

The list of DeFi services is constantly growing, here are some of them:

  • Money transactions around the world

  • Access to stable currencies

  • Loans

  • Deposits

  • Trading

  • Investments

  • Insurance

However, despite the great financial freedom offered to users, serious challenges regarding DeFi still exist. For example, a lack of consumer protection. DeFi is free of rules and regulations. But it means that users often have no legal protection if something goes wrong. There are no government reimbursement systems for DeFi and no laws requiring capital reserves for DeFi service providers.

The problem is that the rules and restrictions that could potentially protect the user do not fit the decentralization concept. So, the path forward may be unclear, but it will certainly be important for DeFi investors to monitor the evolution of the regulatory environment for this new financial sector.

Despite all the concerns and the so far insufficient resistance to hacker attacks, DeFi could gradually break the monopoly of traditional financial institutions and decrease the cost of traditional financial services by removing barriers and giving everyone equal access to the financial infrastructure.

 

 

ESP32 Overview

IoT hardware is at the heart of every connected project. 

However, choosing the right IoT hardware for your project can be overwhelming due to the sheer number of development boards and modules in the space.

In our practice, we are guided by the customer's choice.

Nevertheless, it is undoubtedly useful to know more about a board's specifications and capabilities.

One of the chipsets we worked with in our projects is the ESP32 by Espressif Systems.

Espressif is a fabless semiconductor company that develops Wi-Fi and Bluetooth low-power IoT hardware solutions. 

They are most well-known for their ESP8266 and ESP32 series of chips, modules, and development boards. 

In fact, many development boards across the industry run on Espressif chips (like Sparkfun’s development kits).

Espressif development boards are designed for simple prototyping and interfacing but can serve as anything from a simple proof of concept to an enterprise solution. Espressif also offers several software solutions designed to help you manage devices around your home and integrate wireless connectivity into products. Some of the IoT development boards they offer are:

2.4 GHz Wi-Fi & BT/BLE Development Boards  —  These boards provide PC connectivity, 5V/GND or 3V3/GND header pins, ESP-IDF source code, and example applications. They support everything from image transmission to voice recognition and come with a variety of optional features, such as an onboard LCD, JTAG, camera header, RGB LEDs, etc.

2.4 GHz Wi-Fi Development Boards  —  Standard set of development boards that integrate the commonly-used peripherals.

As mentioned, you can certainly use the ESP32 for prototyping and establishing a Proof of Concept (PoC). If your app needs several devices, the ESP32 is a great fit.

One of the major advantages of ESP32 is the presence of inbuilt WiFi and Bluetooth stacks and hardware. 

Therefore, the ESP32 will be your microcontroller of choice in a static application where good WiFi connectivity is guaranteed, such as a heating-equipment monitoring application in a fixed appliance. The presence of a WiFi stack on the module itself means you save money on an additional networking module. 

However, if you use ESP32 in an asset tracking application, where it keeps moving around, you will have to rely on a GSM or LTE module for connectivity to the server (because you will not be guaranteed WiFi availability). In such a scenario, ESP32 loses the competitive advantage. We will discuss a more suitable board for moving devices next time.

To recap, the ESP32 has specs good enough to accommodate most applications. When scaling up production, you just need to make sure the specs are not more than you actually need.

In other words, if you can get the desired output with modest specs, you may be better off using a cheaper microcontroller and save money. These savings become significant when your production numbers increase by orders of magnitude.

However, production aside, the ESP32 is definitely an ideal microcontroller for prototyping and establishing a PoC. That is why our customer preferred this board for his prototype.

 

 

Dogami – petaverse

Remember the Tamagotchi, a digital pet from the past? Perhaps the popularity of this toy, which seems quite primitive today, lay in the possibility of adopting a pet, even for those who are unable to do so for whatever reason.

For example, many people like dogs, but few have enough time to spend with a pet and are able to create all the conditions necessary for its care. Soon this problem may be solved by a quite realistic and colorful petaverse: Dogami.

Players (Dogamers) can adopt virtual dogs, play games and compete with others. It will be possible to interact with the pet through augmented reality using the Dogami app, available for any type of smartphone on iOS or Android.

According to the developers, the Dogami roadmap includes the implementation of land sales and dog crossbreeding - there will be an opportunity to mint a new puppy NFT with a random gender from two virtual pet NFTs.

The game is built on the Tezos blockchain, which provides low gas fees, fast data processing and, as the developers assure, clean NFTs with a very low carbon footprint thanks to one of the most energy-efficient blockchain technologies. The petaverse's utility token, $DOGA, can be used to raise your dog, buy event tickets and consumables, and create a digital wardrobe by purchasing virtual accessories and luxury items in the marketplace, such as caps, bucket hats, varsity jackets, bandanas, beds, bed pillows, hoodies and belt bags. 

By the way, Gap Inc., the American worldwide clothing and accessories retailer, teamed up with Dogami to launch the first fashion collaboration in the petaverse. Each item of the collection will be available in the game to create an individual style for your virtual pet. 

And, of course, it's worth mentioning Dogami's play-to-earn concept. You can earn $DOGA by completing daily challenges and being rewarded for multiple days of consistent play. The amount of earnings strongly depends on your game level: at level 1 you can earn a maximum of 5 $DOGA a day, but by level 10 it is already up to 50 $DOGA.

On September 14, the first 100 Dogamers got the chance to participate in the early stage of the game and support Dogami's development. 

Even though Dogami has not yet fully launched, it has already attracted a huge amount of attention and positive feedback, since the theme of the game appeals to a really large audience. With that in mind, you can only imagine the potential of a metaverse for cats!

 

MetaFi – first social trading Metaverse

We have already written a lot about metaverses. We talked about metaverses where you can discover new lands, farm, live in a giant skyscraper, and even about a pet metaverse. We have mentioned that the fashion, entertainment, and sports industries are actively investing in their meta-future. After all, they understand the opportunities of brand promotion in the virtual world, because the metaverse is not just a realistic 3D virtual game, but a huge ecosystem with immense potential and a rapidly developing economy. According to industry experts, the Metaverse will far surpass the real world economy in the coming years.

The metaverse economy is built on cryptocurrencies - decentralized digital money based on blockchain technology. Decentralized Finance, or DeFi, is an umbrella term for platforms that allow investors to trade financial products over a decentralized blockchain network.

Transactions do not require intermediaries such as banks or brokers. DeFi accepts investors who do not have a government ID, brokerage or bank account, proof of residency, or social security card.

What happens when you combine the Metaverse and DeFi? Welcome to MetaFi!

The developers of MetaFi claim that current market products are not able to make Web3 truly decentralized: in practice, it is traded, and talked about, on Web2. MetaFi aims to change this by making trading truly social on Web3.

The MetaFi world is divided into futuristic trading zones focused on each kind of asset - you can trade tokens, NFTs, tokenized stocks, commodities, and bonds.

The $METAFI token is hyper-deflationary by design: as more users enter the metaverse and trade, more fees are generated, and those fees fund activities that reduce MetaFi's circulating supply (buying and burning, providing liquidity, staking).
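The fee-burn mechanism behind such a deflationary design can be modeled with a short sketch. This is purely illustrative - the class, fee rate, and burn share below are invented for the example, not MetaFi's actual contract logic:

```python
# Illustrative fee-burn model: a fraction of every trade's fee is
# permanently removed from circulating supply, so the supply shrinks
# as trading volume grows (the "hyper-deflationary" effect).

class DeflationaryToken:
    def __init__(self, supply, fee_rate, burn_share):
        self.supply = supply          # circulating supply
        self.fee_rate = fee_rate      # fee charged per trade, e.g. 0.003 = 0.3%
        self.burn_share = burn_share  # fraction of each fee that is burned

    def trade(self, volume):
        """Process a trade and burn part of the fee; returns the amount burned."""
        fee = volume * self.fee_rate
        burned = fee * self.burn_share
        self.supply -= burned
        return burned

token = DeflationaryToken(supply=1_000_000.0, fee_rate=0.003, burn_share=0.5)
for daily_volume in [50_000, 80_000, 120_000]:  # growing trading activity
    token.trade(daily_volume)
print(f"Circulating supply after burns: {token.supply:.2f}")
# → Circulating supply after burns: 999625.00
```

The key point of the design is visible in the loop: higher volume means more fees, and more fees mean a faster-shrinking supply.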

To enter the MetaFi world, you will need to create your virtual avatar - a MetaFi Citizen - and connect it to your crypto wallet.

One of the most important features of MetaFi is the ability to communicate: participants can chat, exchange text and voice messages and images, share knowledge, and show their trades in real time. In this world, everyone trades together and tries to beat the market using collective thinking, research and technical analysis.

“Most trading platforms compete with nearly identical products. MetaFi reimagines the trader’s experience: making trading fun, engaging, and social. With the Trading World, MetaFi will aggregate major decentralized protocols with deep liquidity, wrapped in a new gamified trading environment, designed to be a seamless onramp for a non-crypto audience,” said Matt Danilaitis, founder of MetaFi. 

This month, MetaFi announced the successful completion of a $3 million funding round, valuing the company and its Web3 virtual trading platform at $25 million.

The booming interest in cryptocurrencies and metaverses, as well as the opportunity to improve trading skills while communicating and exchanging experience with other participants, makes MetaFi very attractive to the crypto community. Currently there are over 100,000 participants on the waiting list.

 

Is there an alternative to PoW and PoS algorithms?

What is a consensus algorithm in the blockchain? Since decentralized networks require tools to agree on decisions and to ensure overall reliability, a mechanism for coordinating system processes has been developed. This mechanism is called a consensus algorithm. It is a decision-making procedure aimed at preventing centralized control of the network and ensuring that everyone follows the rules.

One of the main differences between various cryptocurrency networks is the type of consensus algorithms used.

In the article about Ethereum Merge (http://www.inmost.pro/blog/ethereum-merge/), we talked about the two currently most commonly used algorithms: proof-of-work and proof-of-stake. Both have their pros and cons, their supporters and opponents. And while the crypto industry rapidly evolves, developers will continue to come up with new solutions searching for the perfect one.

Let us now take a look at what alternative consensus algorithms exist, although they are less popular than proof-of-work and proof-of-stake.

 

Proof-of-Burn (PoB)

Unlike proof-of-work, this algorithm is not very energy consuming. Miners do not need to invest in physical resources - powerful hardware. Instead, they burn cryptocurrency coins to be selected for mining and validating a new block. Coins sent for burning can never be recovered. The more coins burned, the higher the chance of being selected as a validator. The system rewards miners to cover the initial cost of the burned coins over a certain period of time.

One of the main drawbacks of this algorithm is that it does not really reduce energy consumption, since in most cases coins mined with proof-of-work (such as Bitcoin) are used for burning.

Also, this algorithm lacks speed, and since it is not yet widely used, its efficiency and security still need to be tested.

 

Proof-of-Authority (PoA)

This algorithm is based on a reputation concept that uses a limited number of block validators. Blocks and transactions are verified by pre-approved participants with confirmed identities who act as system moderators.

To be selected as a validator, a candidate must be trustworthy and have no criminal record. Reputation is a major investment here, as confidence in the identity of the validator ensures the security and reliability of the entire network.

It is clear that this approach, in addition to disclosing the validators' identities, has another significant flaw: centralization. However, this very factor makes PoA attractive for large enterprises and private use.

 

Delegated-Proof-of-Stake (DPoS)

DPoS consensus is achieved by the voting of delegates (third parties) authorized by the stakeholders, with voting power proportional to the number of coins each user holds. Note that this mechanism also relies on the reputation of the delegates. In case of suspected manipulation or rules violations, the community can replace a delegate at any time.

Delegated-proof-of-stake was designed to be more efficient than the proof-of-stake and proof-of-work consensus algorithms, especially in terms of transaction processing speed.

One of the main problems with DPoS consensus is the possibility of collusion between delegates. This can lead to centralization of the network and increase vulnerability to attacks.
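The stake-weighted election described above can be sketched in a few lines. This is a hypothetical model for illustration only - the voter names, balances, and function are invented, not any real DPoS chain's implementation:

```python
# Hypothetical DPoS election sketch: each stakeholder votes for one
# delegate, and the vote is weighted by the number of coins the voter
# holds. The top vote-getters become the block-producing delegates;
# the community can vote a misbehaving delegate out in a later round.

from collections import defaultdict

def elect_delegates(votes, seats):
    """votes maps voter -> (chosen delegate, coins held)."""
    tally = defaultdict(float)
    for delegate, coins in votes.values():
        tally[delegate] += coins  # voting power proportional to balance
    # Delegates with the highest weighted vote totals win the seats.
    return sorted(tally, key=tally.get, reverse=True)[:seats]

votes = {
    "alice": ("node-A", 500.0),
    "bob":   ("node-B", 120.0),
    "carol": ("node-A", 80.0),
    "dave":  ("node-C", 300.0),
}
print(elect_delegates(votes, seats=2))  # → ['node-A', 'node-C']
```

Note how the weighting makes collusion dangerous: a few large holders backing the same delegates can dominate every election, which is exactly the centralization risk mentioned above.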

 

Proof-of-Elapsed-Time (PoET)

This is an algorithm that prevents high resource usage and energy consumption. Its concept was invented by Intel. 

Determining the node that gets the privilege of adding a block is a kind of lottery in which all participants of the network have equal opportunities. Each node in the blockchain generates a random wait period and goes to sleep for that amount of time. The node with the shortest waiting time "wakes up" first and adds the new block to the chain, passing the necessary information to the entire network. The same process is then repeated to find the next block.
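A minimal sketch of that lottery looks like this. It is a pure simulation of the selection logic - real PoET relies on Intel's trusted execution environment to prove that each node actually waited honestly, which is omitted here:

```python
# PoET-style lottery sketch: every node draws a random wait time, and
# the node whose timer expires first wins the right to propose the block.

import random

def poet_round(nodes, seed=None):
    rng = random.Random(seed)
    # Each node generates its own random wait period.
    wait_times = {node: rng.uniform(0.0, 10.0) for node in nodes}
    # The node with the shortest wait "wakes up" first and adds the block.
    return min(wait_times, key=wait_times.get)

winner = poet_round(["node-1", "node-2", "node-3"], seed=42)
print(f"{winner} proposes the next block")
```

Because every node's draw comes from the same distribution, each participant has an equal chance of winning any given round, which is the "fair lottery" property the protocol is built on.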

The cause for concern regarding PoET is a recently discovered vulnerability in Intel's technology, which serves as the foundation of the protocol. In addition, the reliance of consensus on a third party's technology (Intel's) runs against the paradigm that cryptocurrencies are trying to implement with blockchain networks: removing the need for trust in intermediaries.

 

Proof-of-Capacity (PoC)

It allows mining equipment to use available hard drive space to determine mining privilege instead of using the device's computing power. The larger the hard drive, the more possible solution values can be stored on it, and the higher the chance that the miner's list contains the required hash value, giving a better chance of earning the reward.

Despite the fact that the mining process is part of this protocol, it is considered less energy consuming, as there is no need for super-powerful hardware. The disadvantage of this protocol is insufficient security and vulnerability to malware attacks.

This is far from a complete list of existing consensus algorithms. The algorithms listed here are used quite infrequently. They have good potential, but there is still a lot of room for improvement. 

Even though proof-of-work is still the most commonly used algorithm today, Ethereum's recent move to proof-of-stake has triggered a bit of a gold rush among companies searching for the perfect consensus algorithm to move the industry forward.

 

Ethereum after the Merge – what has changed?

The Ethereum upgrade, one of the most impressive achievements in the blockchain industry, has finally been completed.

"And we finalized it!... Happy merge all! This is a big moment for the Ethereum ecosystem," said Ethereum co-founder Vitalik Buterin in a tweet. 

So, what has changed since Ethereum switched to proof-of-stake, apart from the price of ETH dropping about 20% in the last 7 days?

Since the preparations for the Merge in the Ethereum community have been ongoing for several years, it is currently unlikely that the event itself will cause significant changes in the overall development of Web-3.

There are prerequisites for a positive trend in the NFT segment, as many artists and users have had an antipathy towards blockchain technologies due to the environmental impact of high energy consumption. By switching to proof-of-stake, Ethereum became much more eco-friendly. In fact, less than an hour after the Merge was completed, a user spent 36 ETH - about $60,000 - to mint the first NFT on the proof-of-stake network: a panda face image called "The Transition."

At the same time, the eco-aspect had a crushing effect on miners, who turned out to be the side that suffered most from the Merge. It is possible that some miners will choose to mine on another chain instead of selling their gear.

Of course, the biggest concern and criticism of post-merge Ethereum is that it is moving toward centralization. Proof-of-stake depends on users buying, holding and staking large amounts of the network's cryptocurrency.

And while control of the Ethereum network will no longer be concentrated in the hands of a few publicly traded mining syndicates, critics insist that previous powerful players will simply be replaced by new ones. Lido, a kind of community of validators, controls over 30% of the stake on the Ethereum proof-of-stake chain. Coinbase, Kraken, and Binance - the three largest crypto exchanges - own another 30% stake in the network.

“Since the successful completion of the Merge, the majority of the blocks — somewhere around 40% or more — have been built by two addresses belonging to Lido and Coinbase. It isn’t ideal to see more than 40% of blocks being settled by two providers, particularly one that is a centralized service provider (Coinbase),” explained Ryan Rasmussen, a crypto research analyst.

Since decentralization is the main component of the Web3 concept, this problem must be solved for Ethereum to develop successfully and stay ahead of competitors in the future.

Therefore, the Merge cannot be considered the final transformation of Ethereum. The challenge is to keep upgrading the network to fit the decentralization concept and to increase security and speed.

As Buterin admitted, the Merge is just the beginning. "To me, the Merge just symbolizes the difference between early stage Ethereum, and the Ethereum we've always wanted to become," he said. "So let's go build out all of the other parts of this ecosystem and turn Ethereum into what we want it to be."

No matter how much the traditional financial sector resists the advance of cryptocurrencies, they will inevitably take a dominant position in the future. And there is no doubt that the evolving Ethereum is one of the main pillars of this industry.

 

Tips for a successful IoT team development process

Many sources describe the challenges and failures faced by companies launching IoT projects. 

In this article, I would like to look into this aspect from the perspective of IoT app developers. 

Drawing on surveys of our IoT team members who participated in IoT solution development, we have identified the following issues, which we will definitely take into account in upcoming projects and which may be helpful to other developers:

Communication

Communication is the main factor in our teamwork. In the case of an IoT project, it implies not only communication between teammates but also correct communication and connection between the IoT device and the application.

This means that during development it is extremely important for the team to have the IoT device on hand - it is a must-have.

Collaboration

Few other projects demand as much collaboration between teams with such varied expertise. This is an absolutely reasonable approach, because it is impossible to maintain a satisfying level of expertise in every required technology. So, managing an IoT project requires virtuoso communication and a clear understanding of which division is responsible for what, as well as of the tasks for each stage and each link in the chain, with a clearly defined result of the work.

Project Business Goals

Even at the MVP stage, it is extremely important to understand the business goals of your project. The main goal of the Internet of Things is to solve a business problem, not to impress techno geeks with a cool idea. Concentrate on the problem, and the technology will follow. A clear idea will enable the whole team to find better solutions and build development processes.

Apps for Clients

And one of the most important points for the app development team: the customer often thinks more about the connected devices than about the application itself.

However, it is the application and its data that create demand for connected devices, not vice versa. It's important to remember that even a tiny IoT project can unveil significant business opportunities. 

One of the strongest indicators of IoT maturity is the use of analytics. Adding analytics revolutionizes a project: when you analyze IoT data, you get information that can be used to achieve business goals and objectives. So, don't miss out on these opportunities along the way.

The issue of security is so obvious to everyone that it’s not even worth being mentioned.

So, let's make the IoT development process a smooth path to incredible results.

 

The Merge – Ethereum is on the edge of grandiose changes!

The whole crypto community is holding its breath ahead of the most significant event in Ethereum's history. A large-scale update called The Merge is planned for the Ethereum network, which involves changing the consensus algorithm from Proof-of-Work (PoW) to Proof-of-Stake (PoS).

The goal of the upgrade is to make this blockchain platform more scalable, secure and decentralized.

The actual activation of the Merge will happen with the "Paris" update, around September 15th, when the cumulative total difficulty reaches the Terminal Total Difficulty (TTD) of 58750000000000000000000. The TTD specifies the final proof-of-work block, after which proof-of-stake consensus takes over.

With the change of consensus, the era of mining will finally come to an end. After Ethereum's transition to PoS, miners in the network will be replaced by validators, who will confirm new transactions backed by their stakes and receive a reward in ether (ETH) for this work.

To clearly understand what is happening, let's take a closer look at the basic concepts of the blockchain and figure out what consensus algorithms are and what are the pros and cons of PoW and PoS algorithms.

Each blockchain has its own protocol: a set of rules and actions aimed at transferring data. The protocol is a critical component of blockchain technology that allows network nodes to interact, data to be transmitted, and mined blocks to be confirmed. A node is one of the many devices that runs the blockchain protocol software and usually stores a history of transactions. Nodes are connected to each other in a decentralized network. 

The consensus algorithm ensures that the rules of the protocol are followed and that all transactions are authentic. In other words, it is responsible for ensuring that all the nodes of the network agree with the adding of a new block. In this way, the consensus algorithm maintains the integrity and security of the network.

Proof-of-Work and Proof-of-Stake are currently the most used and well-known consensus algorithms. In fact, there are many more consensus algorithms, but for now we will consider only these two.

 

Proof-of-Work (PoW) is widely used in cryptocurrency mining, for validating transactions and mining new tokens. It is a mechanism that allows the decentralized Ethereum network to come to a consensus, or agree on things like account balances and the order of transactions. This prevents users from "double spending" their coins and ensures that the Ethereum network is extremely difficult to hack or fake.

To add a block, network members need to solve an arbitrary mathematical puzzle to find a valid hash, publicly proving the work done so that the system cannot be cheated.

A hash function converts an array of input data of arbitrary length into a bit string of fixed length, using a specific algorithm. The conversion performed by the hash function is called hashing, and its result is called a hash. The hash calculation process requires a lot of energy, which only increases as more miners join the network.

The first miner lucky enough to find a solution gets the right to add a block to the chain. Moreover, it earns the reward for the work done, and this is the main motivation for mining. All nodes compete with each other, increasing their computing capacity in order to be the first node to receive the reward.
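The puzzle itself can be illustrated with a toy example. This is a simplified model (fixed, tiny difficulty; invented block data), not Ethereum's actual parameters: the miner searches for a nonce such that the SHA-256 hash of the block data plus the nonce starts with a required number of zero hex digits.

```python
# Toy proof-of-work: brute-force a nonce until the hash of
# (block data + nonce) starts with `difficulty` zero hex digits.
# Anyone can verify the result with a single hash - that asymmetry
# (hard to find, trivial to check) is the "proof of work".

import hashlib

def mine(block_data, difficulty):
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):  # proof that the work was done
            return nonce, digest
        nonce += 1

nonce, digest = mine("block 42: alice->bob 5 ETH", difficulty=4)
print(f"nonce={nonce}, hash={digest}")
```

Each extra zero digit multiplies the expected number of attempts by 16, which is why real networks, with dozens of leading zero bits required, consume so much energy.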

The main disadvantages of PoW:

  • Mining requires an enormous amount of energy. Nodes in the network compete with each other, constantly performing complex calculations. As a result, most of the work is done for nothing, since the reward goes to only one node. Bitcoin mining consumes more energy than countries like Switzerland or Greece;
  • Low speed and poor scalability. PoW blockchains are sorely lacking in speed. For example, the maximum throughput of the Bitcoin network is only 7-10 transactions per second. Such low rates are not suitable for mass, everyday use;
  • Users have to pay fees to miners for transaction verification. The more users in the network, the higher the commission. For small transactions, commissions can even exceed the amount of the transfer itself;

 

Proof-of-Stake (PoS)  reduces the amount of computational work required to validate the blocks and transactions that keep the blockchain secure. Computing power (block validation) is replaced by staking. Staking is the process of blocking cryptocurrency assets in order to earn rewards or interest.

This algorithm gives the right to create the next block to the node with the larger balance - the amount of staked resources, for example cryptocurrency coins. The node does not receive a reward for creating the block itself; instead, it is rewarded with transaction fees. 
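The balance-weighted selection can be sketched as follows. This is an assumption-level model for illustration - real Ethereum proposer selection uses a more involved randomness scheme, and the validator names and balances here are invented:

```python
# Stake-weighted validator selection sketch: the probability of being
# chosen to create the next block is proportional to the staked balance.

import random

def pick_validator(stakes, seed=None):
    rng = random.Random(seed)
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

stakes = {"val-1": 32.0, "val-2": 96.0, "val-3": 32.0}
# val-2 holds 60% of the total stake, so over many rounds it should be
# selected roughly 60% of the time.
counts = {v: 0 for v in stakes}
for round_no in range(10_000):
    counts[pick_validator(stakes, seed=round_no)] += 1
print(counts)
```

The simulation also makes the centralization concern concrete: whoever accumulates the most stake wins proposal rights most often, compounding their advantage.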

The main advantages of the PoS algorithm:

  • Low power consumption compared to PoW algorithms;
  • No special equipment needed;
  • High speed and scalability. For example, throughput can increase up to 2,000 transactions per second;
  • Low commissions;
  • Participation in the evolution of the project. Validators are taking part in voting on the future development of the project;

 

But aside from the fact that proof-of-stake is younger and less tested than proof-of-work, the biggest concern about the PoS algorithm is the risk of centralization. Validators who hold the largest amounts of coins will eventually control the majority of the network. Therefore, blockchain developers have been working on new versions of the PoS algorithm in recent years to solve this issue.

So, what will happen when the cumulative difficulty of Ethereum mining finally exceeds the assigned TTD value? After crossing this milestone, there will be no more mining. Network users (wallets) will stop accepting blocks from miners and will wait to get them from PoS validators.

The updated version of the protocol after the transition is called "Paris", continuing the line of European capitals after "Berlin" and "London". On the evening of September 11, almost 84% of wallets were ready for the transition.

To become an Ethereum validator, you need to deposit at least 32 ETH. To optimize the calculations, staking participants are divided into committees: randomly assembled groups of 128 to 2048 validators. 

Time in proof-of-stake Ethereum is divided into slots (12 seconds) and epochs (32 slots). In each slot, a randomly selected validator proposes a block; this validator is responsible for creating the new block and sending it to other nodes in the network. A committee of validators then votes on the validity of the proposed block. Committee members are shuffled after each epoch. 
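The slot and epoch figures above are easy to check with simple arithmetic: since a slot is 12 seconds and an epoch is 32 slots, committees are reshuffled every 6.4 minutes.

```python
# Verifying the epoch timing from the slot parameters given above.
SLOT_SECONDS = 12
SLOTS_PER_EPOCH = 32

epoch_seconds = SLOT_SECONDS * SLOTS_PER_EPOCH
print(f"One epoch lasts {epoch_seconds} s ({epoch_seconds / 60:.1f} minutes)")
# → One epoch lasts 384 s (6.4 minutes)
```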

By June 2022, the energy consumption of the Ethereum blockchain was 112 TWh per year. As a result of replacing mining with staking, this amount will decrease by 99.95%. This will not affect the operational processes of the protocol and the economics of projects, but it will allow Ethereum to avoid criticism from the “greens”. In addition, Ethereum will become more attractive to investors who take into account environmental issues. The developers claim that after PoS implementation, each node will require no more electricity than a regular PC.

The Ethereum roadmap includes the implementation of a technology called sharding, which is necessary to increase the scalability of the blockchain. Sharding is the division of a common database into fragments and distributed storage of information by nodes. This update will allow the Ethereum network to grow in line with the increasing load.

Sharding will reduce hardware requirements and allow the node to run on laptops and smartphones. The update is planned to be integrated in 2023, but the final date depends on the effectiveness of the Merge - transition to Proof-of-Stake.

There is no doubt that the Merge is one of the most significant events in the history of cryptocurrencies, and it may have far-reaching consequences, from unpredictable fluctuations in the price of ether to global changes across the crypto industry.

 

Are you ready for the Metaverse?

Introduction

The metaverse is an online world: a virtual space where a digital version of you performs activities similar to those in the real world, such as communicating, building, owning digital real estate, and using digital currency that can be converted into real transactions.

Matthew Ball, a leading ideologist of the metaverse, identifies its seven main features:

  • Endless existence - it never stops, resets or ends;
  • An unlimited number of audiences can connect to it simultaneously. Anyone can connect to the metaverse at any time and participate in its life on an equal basis;
  • A functioning economic infrastructure and reward system for virtual work that brings value recognized by others. Income or earnings that can be spent and invested;
  • Compatibility of all data from different digital worlds;
  • Independence from external factors - it exists in real time, although developers can create and schedule events in the metaverse;
  • Bridges the digital and physical worlds, private and public networks, open and closed platforms;
  • Filled with content created by a variety of individual contributors or groups of users;

Though currently existing gaming platforms are the best examples of the metaverse, it also holds a huge number of opportunities for businesses. 

Today, many companies have integrated VR technology into their manufacturing processes. Ford, for example, uses virtual reality to allow employees from different countries to work on car design simultaneously. VR helps make processes faster and cheaper, and does not require physical materials. 

VR technologies can be useful not only in production but also in office work. Imagine being able to work remotely and still hold meetings and negotiations with colleagues and business partners in a virtual space. Platforms such as glue.work or Mesh, which Microsoft introduced in November 2021, already allow several people to communicate in one virtual space. 

The participants can interact with each other and even with 3D objects. So far, this is possible with digital avatars, but later Microsoft plans to develop technologies that will allow people to appear in a virtual environment through their own holograms, which will help express real emotions and communicate better.

Blockchain is also an extremely important part of the metaverse world. It is the economic mechanism of the future. Blockchain will tie each user's data and money to their digital account and allow them to use purchased products throughout the metaverse.

The rapid development of the metaverse will bring benefits to many areas of our lives: education, entertainment, the arts, health care, and others. It opens up important business opportunities, even more impressive than the digital transformation that has turned most companies into online businesses over the past few years.

The metaverse is creating a new economy that can significantly increase brand awareness, deliver immersive customer experiences, and improve communications. 

The fashion industry is already using these tools. Fashion shows where the model walks not on the catwalk, but in the virtual world, have already taken place.

Did you know that the sports brand Nike has its own metaverse, Nikeland, where you can style your avatar by buying exclusive digital shoes, clothes and accessories? Adidas has already released a mixed collection for both the real and the digital world: the items of its "Into the Metaverse" collection were sold as NFTs.

Celebrities are also starting to dabble in the NFT world. One example is Quentin Tarantino, who decided to sell part of the Pulp Fiction script that was not included in the final film as an NFT.

Digital giants like Google and Microsoft are now investing heavily in creating their own metaverses, and Facebook has recently renamed itself Meta. According to Zuckerberg, users of the new metaverse will no longer be tied to any particular social network. People will be able not only to view content, but literally to be inside it.

The ultimate goal of the metaverse as a product is to recreate the real world with all the feelings and processes. Companies want people to be able to "live" in the metaverse - to hold meetings there, watch shows, play games and make new friends. 

However, besides the fans of the metaverse, there are those who believe that it has a dark side.

 

Privacy concerns

There is no doubt that technologies that already track our behavior and preferences will be used in the metaverse. Moreover, these technologies will become much more aggressive and intense.

By connecting wearable devices, necessary for immersive virtual experiences, we will allow companies to track our physical reactions and emotions. They will collect and use huge amounts of data for marketing and other purposes.

For example, the eye-tracking technology that VR headsets provide will make it possible to collect information about where and for how long we are looking during metaverse experiences. Such issues raise significant concerns for those who care about privacy.

 

Health Concerns

Returning to the real world after an amazing and impressive time spent in virtual reality can cause sadness or even depression. Adults and children already suffer from gaming addiction. As the metaverse expands, more and more people will suffer from this so-called “digital hangover”.

 

Legislation issues

First, is it possible to control everything that happens in a vast universe comparable to the real world? Allegations of sex parties, harassment, and meetings of extremist organizations in the metaverse have already surfaced. We must be prepared for an increase in such cases.

Second, who will evaluate the ethical and moral components of the content, and how? Can a virtual act be a crime? Who will establish the rules for what is allowed and what is forbidden? The metaverse will cause regulatory problems and create new blind spots in legislation.

Another issue - better described as a challenge the metaverse already faces on its way to expansion - is making virtual reality technology accessible to the general public. One of the reasons is the high price of VR headsets. They are now becoming more affordable and easier to use, and you no longer need a powerful computer to immerse yourself in VR.

Beyond that, a technological breakthrough in headsets should be expected. So far, they are quite bulky and can hardly be called mobile. Portable, convenient glasses that let you enter the metaverse from anywhere will probably be produced soon. Three major releases are coming this year: the Sony PlayStation VR2, Meta's “Project Cambria” headset, and Apple's AR glasses, which will most likely work like smart watches and display notifications, calls, and information in augmented reality.

Technologies aim to enable full immersion in the virtual world, where objects can be touched and felt. The South Korean company bHaptics has presented gloves that not only reproduce your movements in VR, but also allow you to feel objects. Sensitivity is provided by a system of pads that inflate and deflate, applying pressure to different parts of the hand and simulating the sense of touch. bHaptics also has a special suit that converts the sound effects of explosions, shots, and so on into tactile feedback using built-in motors. In the horror game Phasmophobia, for example, you can already feel the touch of ghosts.

Why is there such hype around the metaverse just now, even though the term was first coined back in 1992 by science fiction writer Neal Stephenson in his novel Snow Crash?

Perhaps it was influenced by the Covid-19 pandemic, when people were isolated for a long time: public events like concerts and visits to museums, theaters, cinemas, and clubs were restricted. The human need to interact, as well as the need for self-expression and recognition, significantly increased the value of a digital presence on virtual platforms.

Since the metaverse includes all the main components of the new stage of Internet development, such as NFTs, blockchain, cryptocurrencies, and DeFi, some argue that Web3 is the metaverse.

Based on research, Gartner predicts that people will spend at least an hour a day in the metaverse by 2025.

Are you already metaverse-ready?

 

 

Inmost continues receiving new 5-star Reviews on Clutch for promising App Development from Dyop

Introduction 

We can all agree that vision is one of the most fundamental senses and a powerful means of interfacing with the world. This is the main reason we have to pay attention to any change or deterioration in it. Early detection of persistent eye diseases is critical in preventing further deterioration of vision and maintaining overall eye health.

In addition, such a vital area of healthcare ought to enter the era of digitalization smoothly yet effectively. So what is the solution?

The mobile apps market is developing almost every minute. Going digital has become a necessity, encouraging more and more people to get used to a new reality that is more comfortable and faster. Recent research on the US digital market points to rapid growth in the healthcare segment during 2022, a trend expected to spread worldwide.

This is why the growing demand for mobile apps in such a highly responsible niche as healthcare makes the search extremely complex: businesses urgently need experienced developers who can provide patients with trust, quality, and comfort.

Fortunately, Inmost stands for exactly that kind of software provider. We’re proud to welcome our clients back with new projects and happy to continue our cooperation for many years.

 

About Dyop Vision Associate 

Let’s dive into the history of Dyop Vision Associates. The company is located in Atlanta, Georgia, and headed by Alan Hytowitz. It has spent 14 years creating the Dyop Test, which is used to measure visual acuity.
Moreover, the Dyop is considered a revolutionary way to measure vision in the 21st century - and we’ll explain why.

As a vision scientist, Alan discovered that the letter-based tests dating back to 1862 and still used globally by optometrists and ophthalmologists are actually making vision worse in the 21st century: they lead to eyeglass lenses that are excessively powerful, contributing to the global epidemic of myopia.

In simple terms, a Dyop, short for “dynamic optotype”, is a spinning segmented ring used as a visual target (optotype). This kinetic retinal stimulus is essential for vision, as it helps avoid depletion of the photoreceptor response, and it can be used to measure acuity and refraction based on the Dyop’s angular arc width (diameter).

Originally, the client had a working version of the eye-test algorithm for Windows. However, he wanted to build a Dyop application that could run on any OS, in any browser, on any device. The core goal was to allow a doctor to log in to the system and test a patient’s eyes remotely, or to let patients have their eyes checked remotely and then, if problems are identified, book an appointment with the closest doctor for a physical eye test.
From that starting point, Inmost was ready to help take on this bold challenge.

 

What we delivered 

As a software development partner, we decided to implement the app as a PWA (Progressive Web App) and use the WebSocket protocol to establish a real-time connection between the doctor’s and the patient’s devices.
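To give a feel for how such a real-time link can work, here is a minimal sketch of the kind of JSON messages the two devices might exchange over the WebSocket. The message names and fields below are illustrative assumptions, not the actual Dyop wire format:

```python
import json

# Hypothetical message protocol for the doctor-patient channel.
# All type names and fields are assumptions made for illustration.

def make_stimulus(direction: str, arc_minutes: float) -> str:
    """Doctor's device -> patient's device: show a spinning Dyop ring."""
    if direction not in ("left", "right"):
        raise ValueError("direction must be 'left' or 'right'")
    return json.dumps({"type": "stimulus",
                       "direction": direction,
                       "arc_minutes": arc_minutes})

def make_answer(answer: str) -> str:
    """Patient's device -> doctor's device: one of the three test buttons."""
    if answer not in ("left", "right", "unclear"):
        raise ValueError("answer must be 'left', 'right' or 'unclear'")
    return json.dumps({"type": "answer", "answer": answer})

def score_answer(stimulus_msg: str, answer_msg: str) -> bool:
    """True if the patient correctly identified the rotation direction."""
    stimulus = json.loads(stimulus_msg)
    answer = json.loads(answer_msg)
    return answer["answer"] == stimulus["direction"]
```

In a real deployment, these payloads would be sent through a WebSocket session so the doctor sees each answer the moment the patient presses a button.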

The Dyop app was designed with all client-related risks and security concerns in mind. As we’ve already mentioned, the main task of digital healthcare platforms is to give people confidence and trust. We provide mobile and web versions that help users easily check their vision and get feedback. The app is relevant for both patients and doctors.

The eye-testing process is very simple. The patient looks at the rotating rings with each eye separately and answers which way the ring is rotating: left, right, or unclear. Three buttons for these options control the process.
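A test loop like this is often run as a staircase procedure: shrink the ring after a correct answer and enlarge it otherwise, so the size converges toward the patient's acuity threshold. The sketch below is an assumption for illustration only; the actual Dyop algorithm may differ:

```python
import random

def run_acuity_test(respond, start_arc=10.0, step=0.8, trials=12):
    """Staircase sketch: shrink the ring after a correct answer,
    enlarge it after a wrong or 'unclear' one. The final arc size
    approximates the acuity threshold.

    `respond(direction, arc)` models the patient's button press and
    returns "left", "right", or "unclear".
    """
    arc = start_arc
    for _ in range(trials):
        direction = random.choice(["left", "right"])
        answer = respond(direction, arc)
        if answer == direction:   # correct: make the ring smaller
            arc *= step
        else:                     # wrong or unclear: make it larger
            arc /= step
    return arc
```

A patient who always answers correctly drives the ring smaller than the starting size, while one who never sees the rotation drives it larger.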

 

Key development outcomes

Here are a few key features delivered by Inmost. We’ve implemented 3 types of eye testing:

  • The doctor controls the test while sitting in the same room as the patient.
  • The doctor controls the test remotely.
  • The patient performs a self-test.

 

What did the client say?

We’d like to share our excitement with you about a 5-star review on Clutch - a B2B ratings and reviews platform. 

 
 

 

Alan Hytowitz was enthusiastic about working with us and shared his impressions of the work, describing the details that made him confident in cooperating with Inmost.

“They’ve been very helpful in terms of logistics and planning what needs to be done. Inmost seems to be extremely competent and pleasant, and they seem to have understood exactly what we needed. And even though I’m not really the software person - I’m more the mad scientist of the vision test - I really like the folks at Inmost.”

To summarize, we’ve been thrilled to deliver such an innovative project and to participate in improving the healthcare sector. Inmost looks forward to starting new collaborations to make our future better.

To find out more about our development process, check out the full case study on Clutch. 

A brief overview of communication platforms development – from 1988 till now

Introduction

Communication platforms have come a long way, turning into apps we can no longer do without, such as WhatsApp or Skype. Now it’s hard to imagine chatting without the option to send a GIF or a voice message. Yet these features were not even conceived at the dawn of text-terminal communication.

Infancy

Originally, the Internet was designed to exchange information. Communication platforms’ features included sharing text messages and files, and special protocols and programs were developed to provide users with both functions. One of the earliest real-time messaging protocols, and the corresponding program, was called Internet Relay Chat (IRC). IRC dates back to 1988 and ran on a text terminal.

Internet Relay Chat
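The IRC protocol is plain text: each line carries an optional source prefix, a command, and parameters, with the last parameter allowed to contain spaces after a colon. A minimal parser, following the RFC 1459 framing, might look like this (a sketch, not a complete implementation):

```python
def parse_irc_line(line: str):
    """Split a raw IRC protocol line into (prefix, command, params).

    Handles the optional ':' source prefix and the trailing parameter,
    which may contain spaces. Not a full RFC 1459 parser.
    """
    prefix = None
    if line.startswith(":"):          # optional source prefix
        prefix, line = line[1:].split(" ", 1)
    if " :" in line:                  # trailing parameter with spaces
        head, trailing = line.split(" :", 1)
        params = head.split() + [trailing]
    else:
        params = line.split()
    command, params = params[0], params[1:]
    return prefix, command, params
```

For example, a chat message and a keep-alive ping parse as:

```python
parse_irc_line(":nick!user@host PRIVMSG #chan :hello there")
# ('nick!user@host', 'PRIVMSG', ['#chan', 'hello there'])
parse_irc_line("PING irc.example.com")
# (None, 'PING', ['irc.example.com'])
```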

The ICQ era

After the Graphical User Interface (GUI) took over the world, in 1996 four high school students from Tel Aviv created the “first edition” of a messenger known today as ICQ (a play on the phrase “I seek you”).

ICQ chat

The ICQ client had a graphical interface and, besides text messages, could send files and display graphical content in a scalable window.

ICQ’s success inspired plenty of replicas, and many of them attempted to register their clients in the system. Since the ICQ protocol was never published, other companies simply tried to “hack” it. To protect their messenger, the ICQ developers frequently changed the protocol for interacting with the server, temporarily putting all third-party clients out of service.

At its peak, ICQ accounted for about 20 million users worldwide - an insane number for that time! Such success motivated market leaders like Microsoft, AOL, and Yahoo to develop similar programs, and this whole software class got its name - Instant Messaging (IM).

Metamorphosis

In 1999, the very first attempt to standardize a messaging protocol was made: Jabber, an open-source alternative to ICQ, appeared. It could send messages and files using its own protocol, had clients for Linux and Windows, and - most importantly - had gateways to all the popular messengers of that time, including ICQ.

In 2004, the XMPP protocol (“eXtensible Messaging and Presence Protocol”), based on Jabber, was created.

XMPP architecture scheme

Although XMPP was proclaimed the standard for messenger development, it was too convoluted, and it became even more awkward whenever new features had to be squeezed into the protocol architecture. Today, there are several open-source implementations of XMPP servers and clients.
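To illustrate what XMPP traffic looks like: communication is a stream of XML “stanzas”, and a one-to-one chat message is a `<message>` element with a `<body>` child. The helper below builds such a stanza; it is a simplified sketch (a real client also manages XML streams, stanza ids, and namespaces as defined in RFC 6120):

```python
import xml.etree.ElementTree as ET

def make_message_stanza(sender: str, to: str, body: str) -> str:
    """Build a minimal XMPP <message> stanza as a string.
    Simplified: omits stream setup, stanza ids, and namespaces."""
    msg = ET.Element("message", {"from": sender, "to": to, "type": "chat"})
    ET.SubElement(msg, "body").text = body
    return ET.tostring(msg, encoding="unicode")

def get_body(stanza: str) -> str:
    """Extract the chat text from a <message> stanza."""
    return ET.fromstring(stanza).findtext("body")
```

Calling `make_message_stanza("alice@example.com", "bob@example.com", "hi")` yields an XML fragment like `<message from="alice@example.com" to="bob@example.com" type="chat"><body>hi</body></message>` - the verbosity of this XML framing is exactly the “knottiness” that made extending the protocol awkward.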

Skype age

In 2003, the market was blown up by Skype, a new IM application with a “killer feature” - voice communication. The ability to call regular phone numbers appeared almost immediately, in version 1.2. It was a true revolution. In 2010, developers added group video conferencing. The number of users grew very quickly, and by the end of 2010 Skype accounted for about 667 million users.

Present time

Since 2010, many IM programs have been introduced to the general public. Between 2011 and 2014, IM programs were among the most popular startup ideas. However, it became clear that developing scalable applications able to run 24/7 is a very complicated task.

Enthusiasm is necessary but not sufficient to create a proper program. First, the development process requires a tremendous amount of resources for quality assurance, usually unaffordable for small companies. Second, the “era” of deploying applications on servers in data centers with “real” hardware has passed. Modern messengers have to serve millions of users, and no startup can afford to keep that amount of hardware in reserve.

Today, having a smartphone is almost a must for everyone. This led software developers to a simple idea: linking IM accounts to phone numbers and using them for authorization. The idea has been implemented in all modern messengers, such as Telegram, Viber, and WhatsApp. Though they have desktop clients, you can’t sign up without a phone number.

Telegram has implemented another brilliant idea - channels. A channel is one-directional communication: the channel owner can post messages, while other users (subscribers) have read-only access, with no possibility to reply or delete messages. However, recent versions of Telegram have allowed subscribers to leave comments on posts.

That’s all for now about messenger history. It will surely continue, and we’ll see even more cool features - some of them hard to imagine today. And once they appear, Inmost will write about them in detail.

Inmost chooses the development of software solutions based on Amazon Chime as the main business growth trajectory

Introduction

The modern information technology outsourcing market provides a wide range of opportunities for all participants. We - the Inmost team - track software development trends and, at the same time, look for ways to apply them for our customers’ benefit and business success.

Building on the projects we have successfully delivered so far, we’ve decided to choose several specific fields of expertise as potential growth points for the company. This approach allows us to focus our operational efforts, including further training and certification of employees, on growing proficiency in the selected areas to meet our clients’ needs even better.

One of Inmost’s core areas of expertise is CPaaS (Communication Platform as a Service) platforms, and in particular the Amazon Chime communications service.

Why Communication Platforms as a Service?

We believe communication platforms are already, and will remain, among the most in-demand types of information technology services. According to International Data Corporation (IDC) forecasts, the global communication platforms market was expected to reach $4.56 billion in 2019 and grow 39% by 2023.

Moreover, the estimates were made before the world economy faced changes due to the COVID pandemic. In 2019 no one could have predicted the increasing number of remote employees during 2020-2021. And, therefore, no one could have predicted the enormous growth of demand for high-quality video conferencing and other interaction tools, e.g. instant desktop sharing.

In May 2021, IDC adjusted its original estimate: experts now predict the market will be worth $92 billion by 2023. A recent market overview published on nojitter.com shows that the communication platforms market is snowballing so quickly that estimates of its size become outdated almost immediately.

Inmost has successfully delivered projects implementing various communication platforms for customers, and AWS Chime has impressively improved both the development process and business outcomes.

We’ve chosen Amazon Chime as our communication platform solution because Amazon is a leading provider of cloud technologies. This global tech giant invests immense resources in the development, debugging, and testing of its solutions. With Amazon’s platform, we can stay focused on customers’ needs instead of re-solving already solved problems.

Amazon has gained trust and loyalty all over the globe. At the moment, the AWS Global Infrastructure covers 25 Regions, each with several independent zones (so-called “Availability Zones”). As a result, it responds to requests reliably and almost instantly.

Amazon also has so-called “Wavelength Zones” - AWS infrastructure deployments in which AWS storage and compute services are integrated into 5G providers’ networks. There is no real alternative to this, and it is an essential point for mobile application development.

Amazon launched Chime in 2017. Since then, the service has evolved significantly, and numerous new features have been added. No doubt this secure, reliable, and affordable platform is the way of the future - and Inmost is the right software development company to provide solutions based on Amazon Chime.

Inmost gets a 5-Star Review on Clutch for Ongoing Mobile App Development from Cellar Ventures

Introduction

The mobile apps market has skyrocketed over the last decade. While in 2014 its global revenue amounted to roughly $100 billion, now the number is predicted to be close to $700 billion. As more and more consumers switch to digital, the importance of providing customers with first-class UX becomes more vital for companies than ever.

At the same time, due to the growing demand for mobile apps, the SaaS (Software as a Service) niche becomes extremely overheated by businesses struggling to find reliable developers to establish long-term commercial relationships.

Luckily, Inmost stands for exactly that kind of software provider. We’re proud to welcome our clients back with new projects and happy to continue our cooperation for many years.

About Cellar Ventures, Inc

Cellar Ventures, Inc is a California-based company that operates as an intermediary agency between wineries and their customers on the global market. The company helps cellar owners to export wines via convenient channels and sell bottles directly to end-buyers all over the world.

In 2020, Cellar contacted Inmost with an idea for a digital solution that would enhance the global wine trade and make it easier for both producers and consumers to sell wine abroad. The company needed an app people could turn to for all the information about a particular wine, including year, vineyards, grapes, etc. The solution needed excellent UI/UX to compete and take market share from other apps in the niche.

What we delivered

As a software development partner, the Inmost team took on this project and applied a “full-cycle” approach. We were involved at every stage of product development - from market research, including an overview of competitors’ apps, to development and post-release testing.

The CELLR app was designed to be a reference point for wine lovers. They can not only search for wine-related information, but also purchase bottles, including exclusive wines from private collections.

The app’s users can create wish lists, receive recommendations, and even trade with other cellar owners. Wine sellers, on the other hand, can use the app to take inventory of their cellar stock and set bottle prices.

Key development outcomes

We’ve used an advanced tech stack to make CELLR App convenient and user-friendly. Here are just a few key features delivered by Inmost:

  • an option to recognize any wine label
  • the ability to conveniently buy a bottle of wine or put it up for sale via a smartphone
  • an option to save selected wines to user profiles
  • convenient search & filter functions
  • a full database of wines.

What did the Client say?

We at Inmost were incredibly excited and proud to receive our first 5-star review on Clutch - a B2B ratings and reviews platform.

 

 

In his conversation with Clutch, Jeffrey Ishmael, CEO of Cellar Ventures, shared the details of our ongoing collaboration. He explained that his company needed a mobile app for their business and described the outstanding results achieved in cooperation with the Inmost team.

“We set out to work with Inmost from the beginning, to develop a mobile app for our new wine business, because we want to connect consumers directly to consumer wine sales… From a metrics perspective, we’ve been working together quite efficiently.

Where the company has been effective is in the stability of their own team. We haven’t had a lot of turnover on most of the staff that’s been working on the project, because Inmost does a lot to recruit, train, and motivate their team.

They’ve been very complete with their execution and delivery.”

To find out more about our development process, check out the full case study on Clutch.