Why is your application's response time crucial for your business?

What is the response time?

Applications, software, and websites receive requests from users, and the time it takes to respond to these interactions has a dramatic impact on application efficiency and user satisfaction. Response time is the interval that begins when a user clicks a button on a web page and ends when the server returns the complete response.

There are numerous reasons why an application may be slow to respond to requests, such as high website traffic (too many concurrent requests), memory and compute resource leaks, slow database queries, limited network bandwidth, or even poor application logic.

It is essential to identify and monitor the factors that limit your application's performance. This is the first and most crucial step toward reducing response times and improving overall performance.


Why is it so important?

If response time is important for in-house software that supports the productivity of work processes, it becomes truly crucial for e-commerce applications. A slow web page can discourage visitors and push them toward a competitor's site. This immediately deprives the company of a potential customer and makes the site less competitive. As users leave, the page's bounce rate increases, which in turn hurts search engine rankings.

Customer satisfaction surveys have consistently shown a positive correlation between faster response times and higher customer satisfaction. According to Forrester Research, 77% of customers believe that the most important thing a company can do to provide them with good service is to value their time.


What is considered to be a good response time?

According to Jakob Nielsen, a web usability researcher and consultant, there are three main time limits, determined by the abilities of human perception, that should be kept in mind when optimizing the performance of websites and applications.

0.1 second: gives a feeling of immediate response, creating the impression that the user, not the computer, controls the interaction. In his article "Need for Speed", Dr. Nielsen mentions that research on a wide variety of hypertext systems has shown that users need response times of less than one second when moving from one page to another.

1.0 second: the user's flow of thought is not interrupted, although the delay is already noticeable. Users know the computer is generating a result, but they still feel in control of the overall experience, moving freely rather than waiting on the computer. This level of responsiveness is essential for good navigation.

10 seconds: between 1 and 10 seconds, users feel dependent on the computer and wish it were faster. After 10 seconds, they start focusing on other things, which makes it challenging to bring their attention back when the computer finally responds.

A 10-second delay often forces users to leave the site immediately. And even if they stay, the likelihood that they will successfully complete their tasks is significantly reduced.


Digital giants' experiments

This means a response time of more than 1 second is problematic and needs improvement. The longer the response time, the more likely users are to leave your website or application.

Experiments by major digital giants confirm that even small changes in response time can cause serious consequences.

Google found that switching from loading a page with 10 results in 0.4 seconds to a page with 30 results in 0.9 seconds reduced traffic and advertising revenue by 20%. Reducing the Google Maps homepage from 100 KB to 70-80 KB resulted in a 10% increase in traffic in the first week and an additional 25% in the following three weeks.

Amazon tests revealed similar results: every 100 ms increase in the load time of Amazon.com decreased sales by 1%.

Microsoft Live Search experiments showed that when search results pages were slowed down by 1 second, the number of queries per user decreased by 1.0%, and the number of advertisement clicks per user decreased by 1.5%.

After a 2-second slowdown, queries per user decreased by 2.5%, and advertisement clicks per user decreased by 4.4%.


What metrics are used when measuring response time?

Let's look at six of the most important metrics to watch and the value they provide.

Response Metrics

  • Average response time is the average time taken per round-trip request. It includes loading time for HTML, CSS, XML, images, JavaScript files, etc., so slow components anywhere in the system drag the average up.
  • Peak response time is the longest round-trip time observed and helps to find potentially problematic components, such as a particular request that is not handled correctly.
  • Error rate is the percentage of problem requests relative to all requests. It counts all HTTP status codes that indicate a server error, as well as timed-out requests.

Volume Metrics

  • Concurrent users measures how many virtual users are active at any given time. Although this is similar to requests per second, the difference is that each concurrent user can generate a large number of requests.
  • Requests per second measures the number of requests that are sent to the server every second, including requests for HTML pages, CSS style sheets, XML documents, JavaScript files, images, and other resources.
  • Throughput measures the amount of bandwidth in kilobytes per second consumed during a test. Low throughput may indicate the need to compress resources.
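The response and volume metrics above can be sketched from a simple request log. This is an illustrative computation only: the log entries, the field layout, and the one-second window are all invented for the example.

```python
from statistics import mean

# Hypothetical one-second request log: (response_time_ms, http_status, bytes).
# All values here are invented for illustration.
requests = [
    (120, 200, 15_000),
    (340, 200, 22_000),
    (95, 200, 18_000),
    (1800, 500, 500),
    (210, 200, 30_000),
]

avg_response_ms = mean(r[0] for r in requests)                   # average response time
peak_response_ms = max(r[0] for r in requests)                   # peak response time
error_rate = sum(r[1] >= 400 for r in requests) / len(requests)  # error rate

requests_per_second = len(requests)                       # all arrived in one second
throughput_kb_per_s = sum(r[2] for r in requests) / 1024  # consumed bandwidth

print(avg_response_ms, peak_response_ms, error_rate, throughput_kb_per_s)
```

Note how a single slow outlier (the 1800 ms request) pulls the average far above the typical request, which is exactly why peak response time is tracked separately.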


Response time: a crucial business factor

Users don't care about the reasons behind delays. They just perceive poor service and are annoyed by it. Even a few seconds of delay is enough to create an unsatisfactory experience, and after repeated short delays users will abandon the task and look for somewhere else to complete it. You can lose sales simply because your site or application is too slow.

Leverage efficient and reliable solution development from the Inmost team to keep your application resilient even under sudden, significant load spikes. Remember that when choosing between several equally reliable software applications, users will always pick the fastest one.


LoRaWAN: a leading technology in the LPWA space

Last week, the IoT Solutions World Congress was held in Barcelona, and the booth with the largest number of IoT devices was the LoRaWAN stand. That is impressive. Let's look at why LoRaWAN is so popular in IoT.


IoT Glossary Definition

LoRaWAN is an abbreviation for Long Range Wide Area Network. It is a type of Low Power Wide Area Network (LPWAN) that uses an open standard and transmits over unlicensed frequency bands. LoRaWAN was designed for the Internet of Things (IoT) and provides a far longer range than Wi-Fi or Bluetooth connections. It works well indoors and is especially valuable for applications in remote areas where cellular networks have poor coverage.


The difference between LoRa and LoRaWAN

It's not uncommon to hear LoRa and LoRaWAN used interchangeably, but they're two different things.

LoRa (Long Range) is an LPWAN protocol that defines the physical layer of a network. It is a proprietary technology owned by Semtech (a chip manufacturer) that uses Chirp Spread Spectrum modulation to carry bits over radio frequencies so they can be transported through a network. LoRa is one of the technologies that make LoRaWAN possible, but it is not limited to LoRaWAN, and it is not the same thing.

LoRaWAN (Long Range Wide Area Network) is an upper-layer protocol that defines the network's communication and architecture. More specifically, it is a Medium Access Control (MAC) layer protocol with some network-layer components. It builds on LoRa and refers explicitly to the network and how data transmissions travel through it.


The main characteristics

LoRaWAN has two key characteristics that make the technology particularly suitable for specific IoT markets. 

Firstly, it is an LPWA technology, meaning that LoRaWAN-connected devices can be battery-powered, with battery lives of potentially several years. LoRaWAN networks can also be deployed as wide-area public networks, much as cellular networks are deployed today.

Secondly, LoRaWAN operates in the licence-exempt spectrum, meaning that an end-user or network provider does not need to procure radio spectrum before deploying a network. These characteristics make for cheap and easy network deployment to provide connectivity for battery-powered sensing or actuating devices that can potentially operate for years with minimal maintenance requirements. The trade-off for this flexibility lies in LoRaWAN's limited data rates, which are much lower than today's cellular technologies but are often perfectly adequate for IoT devices. 


LoRaWAN Classes A, B, & C

LoRaWAN defines three device classes that can coexist on the same network.

Class A is purely asynchronous, in what is called a pure ALOHA system. End nodes don't wait for a particular time to speak to the gateway: they simply transmit whenever they need to and lie dormant otherwise. In a perfectly coordinated system over eight channels, you could fill every time slot with a message, with one node starting to transmit as soon as another finishes. Because of collisions, however, the theoretical maximum capacity of a pure ALOHA network is only about 18.4% of that ceiling. Two nodes collide if they transmit on the same frequency channel with the same radio settings.
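The 18.4% figure quoted above is the classic pure-ALOHA result: throughput S = G * e^(-2G) for offered load G peaks at G = 0.5, giving 1/(2e). A quick numerical check:

```python
import math

# Pure ALOHA throughput: S = G * exp(-2G), where G is the offered channel load.
def aloha_throughput(g: float) -> float:
    return g * math.exp(-2 * g)

# The maximum occurs at G = 0.5 and equals 1 / (2e).
peak = aloha_throughput(0.5)
print(f"{peak:.1%}")  # 18.4%
```

Offering either more or less load than G = 0.5 only lowers the usable throughput, which is why a Class A network cannot be pushed past this ceiling by adding traffic.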

Class B systems work with battery-powered nodes. Every 128 seconds, the gateway transmits a beacon. (See the time slots across the top of the diagram.) All LoRaWAN base stations transmit their beacon messages simultaneously, synchronized to the one-pulse-per-second (1PPS) GPS signal: every GPS satellite in orbit transmits a message at the beginning of every second, allowing time to be synchronized worldwide. Each Class B node is assigned a time slot within the 128-second cycle and is told when to listen. You can, for instance, tell a node to listen on every tenth time slot; when that slot comes up, a downlink message can be transmitted (see the diagram above).

Class C allows nodes to listen constantly and receive downlink messages at any time. This class is used primarily for AC-powered applications, because keeping a node listening continuously takes a lot of energy.


Where to use

LoRaWAN networks have been deployed as wide-area private networks, notably to support applications such as smart metering and public space lighting, including street lighting. Deployment of networks for street lighting, in particular, can unlock new opportunities for smart streets.

Some companies have ambitious plans to deploy LoRaWAN as a wide-area public network technology, and the approach is rapidly gaining momentum. In this context, it is worth calling out three companies: Everynet, Helium, and Senet. Everynet has recently pursued a strategy of rolling out such networks, starting in Brazil and following with the USA and Indonesia. The company's networks cover more than 50% of the population of Brazil and more than 40% of the population of the USA, and Everynet will enhance this baseline coverage according to customer demand. Its next priorities include several larger European countries.

Meanwhile, Helium claims to offer the largest LoRaWAN network in the world. Hotspots, or access points, can be deployed by any individual or business and offer coverage as part of the Helium network in return for payment, enabled and administered using distributed-ledger (blockchain) technology. Currently, the Helium network comprises around 850,000 LoRaWAN hotspots. Senet positions itself as a carrier-grade network provider and has a two-way roaming agreement with Helium. In September 2022, Senet announced that it had expanded the build-out of its public LoRaWAN network across all five boroughs of New York City.


Forecast for LoRaWAN adoption

According to the forecast, by 2030 there will be 6.9 billion wide-area wireless IoT connections, of which 36% will use traditional cellular technologies, while 4.4 billion (roughly the remaining 64%) will use LPWA technologies.

Utilizing the power of LoRaWAN can solve a mix of connectivity challenges for things such as sensors and metering across industries, including smart cities, fleets, automotive, agriculture, and industry.

LoRaWAN is ideally suited for deployment as a campus-area network in agricultural contexts, supporting devices ranging from soil-moisture sensors to temperature sensors in greenhouses, and from storage tank level monitoring to remotely controlled irrigation systems. In other enterprise contexts, the technology is well suited to monitoring the location and condition of various assets, enabling building automation solutions and many other applications.

One of the key scenarios includes deploying networks to support inventory management and monitoring, including stock level monitoring and warehouse management systems, which can reduce the load on warehouse employees, freeing them up for other, higher-skilled tasks.

Significant benefits can be gained from monitoring chillers and refrigerators in retail, hospitality, medical and warehouse contexts. In all these cases, a simple LoRaWAN temperature sensor connected to a private network can provide regular temperature readings and help ensure that refrigeration units are maintaining correct temperatures, reducing spoilage and waste.



LoRaWAN limitations

LoRaWAN is fine if you want to build on carrier-owned and operated public networks. Service providers like to compete in this space, so many choices exist. And for simple applications, where you don't have many nodes and don't need many acknowledgements, LoRaWAN works. But if your needs are more complex, you will inevitably hit serious roadblocks. Many LoRaWAN users have not experienced those roadblocks because their networks are still relatively small. Try using LoRaWAN to operate a public network with thousands of users doing different things, and the difficulties will almost certainly skyrocket.

Also, developing and deploying a system around LoRaWAN is a complex process. It is a common misapprehension to think that LoRaWAN "works out of the box" the way some Wi-Fi or cellular modems might. You will want to be sure you understand the architecture and have a good grasp of how the system works before you decide it is the best route for you.



Symphony Link is an alternative LoRa protocol stack developed by Link Labs. To address the limitations of LoRaWAN and provide the advanced functionality that most organizations need, Link Labs built its software on top of Semtech's chips.

Let's discuss it next time.


CELLR + Vuforia


Briefly about the application

Inmost and CELLR set out to create a proprietary application for wine producers and consumers, one where they could easily add new wines without relying on a competitor's licensed product. The application was created for producers and consumers of wine participating as members of a more extensive wine community.

Inmost developed both a mobile and a web version of the CELLR app.

It allows you to get accurate information about the desired wine: producer, vintage, country of release, varietal, market pricing, and consumer reviews.

The app brings together a community where members can share feedback and trade. Inmost and CELLR have developed a trading system where users can make transactions conveniently and safely, with wine authentication.

Users can create their own collections, store them in a personal cellar, and keep records noting when each wine was bought or drunk.

You can also import your inventory from other services or export it as a backup. Among other features, we have implemented convenient filtering and searching by Producer, Varietal, Country, Region, Appellation, Vintage, and Price. 

One of the advantageous features of the application is the search for wine by label photo.

Check this link for a more detailed description: https://inmost.pro/cases/peer-to-peer-trading-platform-that-directly-connects-wine-enthusiasts-and-cellar-owners.


Wine search by label photo

The user needs to press the search by photo button, point the camera at the label, and take a photo. The application will find the wine and show complete information about it. If there are multiple matching results, they will be sorted in descending order of relevance.

During the development of this functionality, we considered two possible concepts:

  • Concept 1. We could recognize the text in the photo of the label and perform a full-text search in a PostgreSQL database.
  • Concept 2. We could use one of the Machine Learning technologies like the Vuforia Engine.


How we evaluated recognition algorithms

To assess the quality of the recognition algorithm, we used a statistical method (check https://en.wikipedia.org/wiki/Type_I_and_type_II_errors).

We created a test set of 73 photos from the most popular manufacturers, including a roughly equal number of labels with no text, a medium amount of text, and a lot of text.

Next, we created a JSON file that contained metadata for each label and a link to its photo in an AWS S3 bucket.

For automated testing, we needed a script that would iterate through all the photos in a folder and make requests to our backend server. The server uses two search algorithms: PostgreSQL full-text search in the first case and the Vuforia Engine in the second. After each server response, the script appends the search result to the JSON file. At the end of the run, the script counts the number of correct and incorrect search results for the label photos.



The server can return one of three search results:

  • the server replied that the wine was found, and its result matched the correct answer;
  • a type I error: the server replied that the wine was not found, although it was in our database;
  • a type II error: the server replied that the wine was found, but made a mistake and returned a different wine.
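The counting logic of the script can be sketched as follows. The `search` helper and the file names are hypothetical stand-ins for the real backend calls; `search` is assumed to return the matched wine name or None.

```python
# Classifies each search result as correct, a type I error (not found although
# present in the database), or a type II error (a different wine returned).
def evaluate(photos, truth, search):
    correct = type1 = type2 = 0
    for photo in photos:
        found = search(photo)
        expected = truth[photo]
        if found is None:
            type1 += 1    # type I: wine is in the database but was not found
        elif found == expected:
            correct += 1  # the match agrees with the ground truth
        else:
            type2 += 1    # type II: the server returned a different wine
    return correct, type1, type2

# Toy run with fake data standing in for real server responses:
truth = {"1.jpg": "A", "2.jpg": "B", "3.jpg": "C"}
fake_search = {"1.jpg": "A", "2.jpg": None, "3.jpg": "X"}.get
print(evaluate(truth, truth, fake_search))  # (1, 1, 1)
```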


With the help of the script, we could test each of the two algorithms in just a few minutes and get the number of correct answers and the number of type I and type II errors for each. By changing algorithm configuration parameters, such as the weights (explained below) for full-text search and the photo quality for the Vuforia Engine, we evaluated each algorithm again and again. In the end, we found the optimal configuration with the maximum number of correct answers and the minimum number of errors.


PostgreSQL Full Text Search Algorithm

We implemented this functionality on the server to test the first concept. The user takes a photo of the label in the mobile application and sends a request to the server. The server converts the image to Base64 and calls the Google Cloud Vision service (Cloud Vision API) to extract text from the photo. The response is a string containing all the text that could be recognized on the label. The server then uses this text for a full-text search in the PostgreSQL database; see the PostgreSQL full-text search documentation for details.


Let's imagine that information about a wine is contained in a table, where the columns hold the name of the wine, the name of the manufacturer, the grape variety, the year of release, the country, the region, and the bottle volume. For full-text search in PostgreSQL, we create a text vector and assign a weight to each column. The possible weight coefficients are D=0.1, C=0.2, B=0.4, and A=1.

Empirically, we selected the best weights for our sample: wine_producer=1.0; wine_name=1.0; country, region, subregion=0.4; varietal=0.2; bottle_size=0.1.
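A sketch of what such a weighted query can look like in PostgreSQL, held here as a parameterized SQL string. The table and column names (`wines`, `wine_producer`, and so on) are assumptions based on the description above, not our actual schema; note that ts_rank's default weight classes {D, C, B, A} = {0.1, 0.2, 0.4, 1.0} match the coefficients listed.

```python
# Weighted full-text search query. setweight() tags each column's tsvector
# with a weight class, and ts_rank() scores matches against the recognized
# label text passed in as the %s parameter.
SEARCH_SQL = """
SELECT *,
       ts_rank(
           setweight(to_tsvector('simple', coalesce(wine_producer, '')), 'A') ||
           setweight(to_tsvector('simple', coalesce(wine_name, '')), 'A') ||
           setweight(to_tsvector('simple', coalesce(country, '')), 'B') ||
           setweight(to_tsvector('simple', coalesce(region, '')), 'B') ||
           setweight(to_tsvector('simple', coalesce(varietal, '')), 'C') ||
           setweight(to_tsvector('simple', coalesce(bottle_size, '')), 'D'),
           plainto_tsquery('simple', %s)
       ) AS rank
FROM wines
ORDER BY rank DESC
LIMIT 1;
"""
```

The recognized label text is bound to the %s placeholder by the database driver, and the top-ranked row is returned as the match.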

After evaluating this algorithm, we obtained the following statistics:

Table A.1 - Number of errors using PostgreSQL full-text search

Item Total
Number of images 73
Correct results 51
Type 1 errors 16
Type 2 errors 6


This algorithm can be markedly improved at the cost of a little extra complexity. We noticed that the critical search parameter is the wine producer: if the manufacturer is identified correctly, the search over the other parameters can be significantly narrowed, since the number of manufacturers is small. If the manufacturer is identified incorrectly, the search over the other parameters is bound to give a false result, which is visible in the number of type II errors.

The improved algorithm should search in two iterations: first find the manufacturer, and only when it has been correctly identified, perform the second iteration of the search over the list of wines from that manufacturer.

However, we did not complicate the search algorithm further and decided instead to test the second concept, using the Vuforia Engine machine learning model.


Algorithm using Vuforia Engine

You can find information about the Vuforia Engine here: https://library.vuforia.com/objects/image-targets.

We created a Vuforia account and used our ready-made test set of photos. Vuforia's SDK lets you easily integrate the engine into the Unity platform (check https://library.vuforia.com/getting-started/getting-started-vuforia-engine-unity).

We set up a Unity project, connected the SDK, and used a webcam. We loaded target photos of labels of varying quality into the Vuforia Engine and showed actual bottles with labels in front of the webcam. We found that a target photo should have good contrast and be taken in good lighting: the same label photographed at different quality levels gives very different results.

Vuforia assigns a conditional rating to the photo (in points from 0 to 5): 


  • Pictures with a rating of 2 points or below were recognized poorly: the label had to be held in front of the camera for a long time before it was recognized. Photos with this rating are not suitable for the application;
  • Photos with a rating of 3 points were recognized well;
  • Photos with a rating of 4 or 5 points were recognized instantly, as soon as the label hit the camera lens;
  • High-contrast photos performed much better, while monochrome photos performed significantly worse;
  • Recognition did not work well if there were spots or dirt on the label, even small ones;
  • If the label is bent or rotated at different angles in front of the camera, it is still recognized well.


We used the Vuforia REST API for automated testing.

(Check https://library.vuforia.com/articles/Solution/How-To-Use-the-Vuforia-Web-Services-API.html).

You send a POST request to the Vuforia service with a Base64-encoded photo in the request body. Vuforia returns a match if the photo is found among the target images.
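A sketch of how such a request can be signed in Python. The signing scheme (HMAC-SHA1 over the method, Content-MD5, content type, date, and path) follows Vuforia's Web Services documentation, but the keys and values below are placeholders, so treat this as an assumption-laden illustration rather than production code.

```python
import base64
import hashlib
import hmac

# Builds the signature for the Authorization header ("VWS {access_key}:{sig}").
def vws_signature(secret_key: bytes, method: str, body: bytes,
                  content_type: str, date: str, path: str) -> str:
    content_md5 = hashlib.md5(body).hexdigest()
    string_to_sign = "\n".join([method, content_md5, content_type, date, path])
    digest = hmac.new(secret_key, string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Placeholder values; the real request is a POST to the /v1/query endpoint
# with the photo in the request body.
sig = vws_signature(b"my-secret-key", "POST", b"...image bytes...",
                    "multipart/form-data", "Wed, 01 Jan 2020 00:00:00 GMT",
                    "/v1/query")
print(len(sig))  # a 28-character Base64 string
```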



Here are some of the examples we’ve tested:


1.jpg => found: {'target_timestamp': 1614817446, 'name': 'Constant_Diamond_Mountain_Vineyard_Cabernet_Franc'}

2.jpg => found: {'target_timestamp': 1614817446, 'name': 'Constant_Diamond_Mountain_Vineyard_Cabernet_Franc'}

3.jpg => found: {'target_timestamp': 1614817446, 'name': 'Constant_Diamond_Mountain_Vineyard_Cabernet_Franc'}

4.jpg => found: {'target_timestamp': 1614817446, 'name': 'Constant_Diamond_Mountain_Vineyard_Cabernet_Franc'}


Photo of a wine label without text:


1.jpg => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}

2.jpg => not found

3.jpg => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}

4.jpg => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}

5.jpg => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}

5.png => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}

6.jpg => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}

8.jpg => found: {'target_timestamp': 1615511800, 'name': 'Orin_Swift_Blank_Stare'}


1.jpg => not found

2.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}

3.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}

4.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}

5.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}

6.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}

7.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}

8.jpg => found: {'target_timestamp': 1615514547, 'name': 'Morlet_Family_Vineyards'}


1.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

2.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

3.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

4.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

5.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

6.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

7.jpg => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}

8.JPG => found: {'target_timestamp': 1615516540, 'name': 'Chateau_Lascombes_Chevalier_de_lascombes'}


We’ve evaluated the quality of the search algorithm for a test set of photos:

Table A.2 - Number of errors using Vuforia Engine

Item Total
Number of images 73
Correct results 61
Type 1 errors 12
Type 2 errors 0


We got fewer errors than with PostgreSQL full-text search, and they were all type I errors, which means we need to improve the quality of the photos: we need photos that the Vuforia Engine will rate 4 to 5. It won't be easy to find 1-2 million good-quality label photos; in one of the following sections, we describe how we solved this problem.


Integrating Vuforia Engine into a Mobile App

At the development stage, we opened this functionality only to beta users. Our recognition algorithm now looks like this:



Before most search queries can succeed, we need a sufficiently large database of label photos. In addition, users may submit photos that are not labels at all, pointing the camera at unrelated objects. When a user makes a photo search request, the wine may or may not be found. We created two AWS S3 buckets to store found and not-found photos separately, along with two tables: vuforia_reco_images and vuforia_noreco_images. If a label photo is found, we store it in the reco bucket and add an entry to the vuforia_reco_images table. If the photo is not found, we save it to the noreco bucket and add an entry to the vuforia_noreco_images table.
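The routing step can be sketched like this; the `storage` and `db` dictionaries stand in for the real S3 uploads and database inserts, and the helper name is hypothetical.

```python
# Routes an uploaded photo to the "found" or "not found" store, mirroring the
# reco/noreco split described above. storage and db are in-memory stand-ins
# for the S3 client and the database connection.
def route_photo(photo_key: str, found: bool, storage: dict, db: dict) -> None:
    bucket = "reco" if found else "noreco"
    table = "vuforia_reco_images" if found else "vuforia_noreco_images"
    storage.setdefault(bucket, []).append(photo_key)     # stands in for the S3 upload
    db.setdefault(table, []).append({"key": photo_key})  # stands in for the DB insert

storage, db = {}, {}
route_photo("user_upload_1.jpg", True, storage, db)   # recognized label
route_photo("user_upload_2.jpg", False, storage, db)  # not recognized
print(sorted(storage))  # ['noreco', 'reco']
```

Keeping the not-found photos is deliberate: they become candidates for new target images once the label database grows.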




Before enabling this search functionality in the production version of the application, we needed to fill our database with label photos: about 1-2 million decent-quality photos. This was the next task before integrating the feature into the mobile app.


Obtaining a database of label photos

To build our database of label photos, we turned to the COLA service (check https://ttbonline.gov/colasonline/publicSearchColasBasic.do).



It is designed to certify products and contains over 2 million wine certificates. Each certificate contains the metadata we need: name, manufacturer, year of issue, bottle volume, and, most importantly, a good-quality photo of the label.

We wrote a separate service that downloads these certificates, parses their content, saves the label photos to an AWS S3 bucket, and saves the metadata to our database. The COLA service has its own classification of product codes; the codes we are interested in are in the range 80-89C.



We created a folder in the AWS S3 bucket for each wine code.



After processing each certificate, we create a separate folder named after the wine, in which we save the JSON metadata file and JPEG photos of the labels. Before adding this data to our database, we adapt it to our wine model and then upload the photos to Vuforia as target images.



To avoid overloading the COLA service with requests, we used the node-cron library (check https://www.npmjs.com/package/node-cron). With its help, our service started once an hour and made 300 requests. Thus, we downloaded 7,200 new certificates per day, with metadata and label photos, without putting a heavy load on the COLA service.

We launched two AWS EC2 instances, each of which downloaded and parsed certificates. Since COLA products are divided by category codes, we easily split the work between the two servers by dividing the categories equally.

Consequently, each of the two AWS EC2 instances downloaded and parsed 300 certificates per hour. It took us about 4.5 months to download all the certificates we needed, with label photos: about 2.1 million.