LatenceTech Insight API
Introduction
You can install open APIs on your analyzer instance to extract real-time measurements and key performance indicators (KPIs) related to Latency, Throughput and Reliability.
The APIs can extract selected data available in the existing visual dashboards, such as latency measurements per protocol, aggregated KPIs for network and application latency levels, and new network quality qualifiers such as volatility and stability.
The API output data is presented in JSON format for a given QoSAgent-Reflector pair. The result can then be easily integrated into other external systems.
The A) Connectivity Insight API allows users to ask the analyzer instance for connectivity performance data about a specific QoSAgent using its AgentID. The request retrieves the latest available data from the last 5 minutes.
The B) Throughput Insight API allows users to ask the analyzer instance for real-time throughput measurements using the Lifbe protocol (when configured and activated). Download and upload results in Mbps and the associated jitter results are retrieved using this API.
The C) Latency Insight API can be used to retrieve the latency measurements for a specific protocol and a specific agent over a user-determined time range specified in seconds.
The D) Geoloc Insight API is used to query the latest average latency measured and its geographic coordinates in GPS format (when available and configured). Adding the optional time_range query parameter to the /api/v1/geoloc endpoint allows users to specify the time period in seconds for which they want to retrieve historical geolocation data.
The E) Twamp Insight API can be used to query latest and detailed latency levels measured using the TWAMP Protocol (RFC 5357) for a given QoSAgent. The latency results are split between forward latency in ms, return latency in ms and processing latency in ms (i.e. time spent within the Reflector).
The F) Forecast Insight API provides the latest forecasted measures using a statistics-based mean and projection.
The API requires that port 12099 be opened and secured on your analyzer instance.
If the API is not already installed, you will find the installation instructions below:
Download
First, download the latencetech_api.yaml file using the following command:
wget https://api.latence.ca/software/latencetech_api.yaml
The latencetech_api.yaml file is a docker-compose file with the following contents:
version: '2.4'
services:
  latencetech_api:
    image: registry.latence.ca/software/latencetech_api:latest
    container_name: latencetech_api
    ports:
      - "12099:12099"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - NODE_ENV=production
    restart: always
Launch
Launch the LatenceTech Insight API using docker-compose:
docker-compose -f latencetech_api.yaml pull
docker-compose -f latencetech_api.yaml up -d
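Optionally, you can confirm that the API is reachable before sending queries. The short Python sketch below simply checks that port 12099 (the port exposed in latencetech_api.yaml) answers on the analyzer host; <analyzer_IP> is a placeholder to replace with your own address.
import socket

ANALYZER_IP = "<analyzer_IP>"  # placeholder: replace with your analyzer address
PORT = 12099                   # port exposed in latencetech_api.yaml

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    try:
        s.connect((ANALYZER_IP, PORT))
        print(f"Port {PORT} is open on {ANALYZER_IP}")
    except OSError as exc:
        print(f"Could not reach {ANALYZER_IP}:{PORT}: {exc}")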
Usage
A) Connectivity Insight API
Used with the /api/v1/ci endpoint.
You can see the measurements for the agents using the agent_id and customer_id parameters:
curl -s "http://<analyzer_IP>:12099/api/v1/ci?agent_id=<agentID>&customer_id=<customerID>"
Or from your browser at:
http://<analyzer_IP>:12099/api/v1/ci?agent_id=<agentID>&customer_id=<customerID>
sample output:
{
"AgentID": "4",
"KPIs": {
"TcpMs": 49.32,
"UdpMs": 47.67,
"HttpMs": 48.2,
"HttpsMs": 45.02,
"IcmpMs": 53.1,
"TwampMs": 50.51,
"DownloadThroughputMbps": 200.22,
"UploadThroughputMbps": 437.88,
"NetworkLatencyMs": 51.8,
"ApplicationLatencyMs": 47.55,
"PacketLossRatePercent": 0,
"JitterMs": 4.68,
"VolatilityPercent": 31,
"NetworkStabilityPercent": 90.1,
"ConnectivityHealth": "Good"
},
"Metadata": {
"Time": "2024-04-29T20:22:51.254Z",
"AgentName": "US West Agent",
"Hardware": "Azure Server",
"NetworkName": "Test Network",
"NetworkType": "Wifi",
"Details": "Azure Agent used for demonstration purposes"
},
"APInotes": {
"comment": "Results from LatenceTech ConnectivityInsight API version 1.11",
"documentation": "Refer to docs.latence.ca for API details and data structure"
}
}
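If you prefer to query the endpoint from a script instead of curl or a browser, the following minimal Python sketch (standard library only) calls /api/v1/ci and reads a few KPIs from the JSON shown above. The analyzer address, agent ID and customer ID are placeholders; the field names are taken from the sample output.
import json
import urllib.parse
import urllib.request

ANALYZER_IP = "<analyzer_IP>"   # placeholder: your analyzer instance
AGENT_ID = "<agentID>"          # placeholder: target QoSAgent ID
CUSTOMER_ID = "<customerID>"    # placeholder: your customer ID

query = urllib.parse.urlencode({"agent_id": AGENT_ID, "customer_id": CUSTOMER_ID})
url = f"http://{ANALYZER_IP}:12099/api/v1/ci?{query}"

with urllib.request.urlopen(url, timeout=10) as resp:
    data = json.load(resp)

kpis = data["KPIs"]
print(f"Agent {data['AgentID']} ({data['Metadata']['AgentName']})")
print(f"  Network latency:     {kpis['NetworkLatencyMs']} ms")
print(f"  Application latency: {kpis['ApplicationLatencyMs']} ms")
print(f"  Packet loss:         {kpis['PacketLossRatePercent']} %")
print(f"  Connectivity health: {kpis['ConnectivityHealth']}")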
B) Throughput Insight API
Used with the /api/v1/lifbe endpoint.
You can display the real-time throughput measurements made with the Lifbe protocol using the agent_id and customer_id parameters:
curl -s "http://<analyzer_IP>:12099/api/v1/lifbe?agent_id=<agentID>&customer_id=<customerID>"
Or from your browser at:
http://<analyzer_IP>:12099/api/v1/lifbe?agent_id=<agentID>&customer_id=<customerID>
sample output:
{
"agentID": "1",
"time": "2024-05-17T15:00:04.735Z",
"lifbeDownload": 542.48,
"lifbeUpload": 49.89,
"jitterDownload": 1.23,
"jitterUpload": 3.13,
"networkInterface": "MOBILE",
"networkType": "MOBILE_5G"
}
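The same pattern works for the throughput data. The sketch below (standard library only) reads the latest Lifbe sample and prints the download and upload rates; host and IDs are placeholders and the field names follow the sample output above.
import json
import urllib.parse
import urllib.request

query = urllib.parse.urlencode({"agent_id": "<agentID>", "customer_id": "<customerID>"})
url = f"http://<analyzer_IP>:12099/api/v1/lifbe?{query}"

with urllib.request.urlopen(url, timeout=10) as resp:
    sample = json.load(resp)

print(f"{sample['time']}  agent {sample['agentID']} on {sample['networkType']}")
print(f"  Download: {sample['lifbeDownload']} Mbps (jitter {sample['jitterDownload']})")
print(f"  Upload:   {sample['lifbeUpload']} Mbps (jitter {sample['jitterUpload']})")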
C) Latency Insight API
Used with the /api/v1/latency endpoint.
You can see the measurements for a specific protocol and a specific agent over a user-determined time range by adding two arguments to the query in addition to the agent_id and customer_id parameters:
1) protocol (tcp, udp, http, https, icmp, twamp)
2) time_range (in seconds)
curl -s "http://<analyzer_IP>:12099/api/v1/latency?agent_id=<agentID>&customer_id=<customerID>&protocol=tcp&time_range=400"
Or from your browser at:
http://<analyzer_IP>:12099/api/v1/latency?agent_id=<agentID>&customer_id=<customerID>&protocol=tcp&time_range=400
sample output:
[
{
"agentID": "1",
"time": "2024-05-17T15:01:26.183Z",
"measurement": "tcp_result",
"value": 15.61
},
{
"agentID": "1",
"time": "2024-05-17T15:01:28.215Z",
"measurement": "tcp_result",
"value": 15.233
}
]
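Since this endpoint returns a list of samples, a script can easily aggregate them. The sketch below requests TCP latency over the last 400 seconds and computes a simple average; host and IDs are placeholders.
import json
import urllib.parse
import urllib.request

query = urllib.parse.urlencode({
    "agent_id": "<agentID>",        # placeholder
    "customer_id": "<customerID>",  # placeholder
    "protocol": "tcp",              # one of: tcp, udp, http, https, icmp, twamp
    "time_range": 400,              # look-back window in seconds
})
url = f"http://<analyzer_IP>:12099/api/v1/latency?{query}"

with urllib.request.urlopen(url, timeout=10) as resp:
    samples = json.load(resp)

if samples:
    values = [s["value"] for s in samples]
    print(f"{len(values)} samples, average {sum(values) / len(values):.2f} ms")
else:
    print("No samples returned for the requested time range")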
D) Geoloc Insight API
If you use the mobile application (or a modem/CPE with geolocation retrieval enabled) to send GPS data to the analyzer, you can check the current latency level as well as historical data with the previous locations.
Used with the /api/v1/geoloc endpoint.
You can add the agent_id and customer_id parameters and an optional time_range (in seconds) to get historical data:
curl -s "http://<analyzer_IP>:12099/api/v1/geoloc?agent_id=<agentID>&customer_id=<customerID>"
For historical data:
curl -s "http://<analyzer_IP>:12099/api/v1/geoloc?agent_id=<agentID>&customer_id=<customerID>&time_range=400"
Or from your browser at:
http://<analyzer_IP>:12099/api/v1/geoloc?agent_id=<agentID>&customer_id=<customerID>&time_range=400
sample output:
{
"agentID": "50",
"time": "2024-05-17T15:01:20Z",
"altitude": "39.1",
"latitude": "45.4961001",
"longitude": "-73.5619866",
"applicationLatency": 10.213
}
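The sketch below reads the latest geolocated point. The coordinates are returned as strings in the sample above, so they are converted before use; the millisecond unit for applicationLatency is assumed from the other endpoints, and host and IDs are placeholders.
import json
import urllib.parse
import urllib.request

query = urllib.parse.urlencode({"agent_id": "<agentID>", "customer_id": "<customerID>"})
url = f"http://<analyzer_IP>:12099/api/v1/geoloc?{query}"

with urllib.request.urlopen(url, timeout=10) as resp:
    point = json.load(resp)

# Coordinates arrive as strings in the sample output, so convert them first.
lat, lon = float(point["latitude"]), float(point["longitude"])
print(f"Agent {point['agentID']} at ({lat}, {lon}): "
      f"application latency {point['applicationLatency']} ms at {point['time']}")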
E) Twamp Insight API
This API can be used to query the latest, detailed latency levels measured using the TWAMP protocol (RFC 5357) for a given QoSAgent. The latency results are as follows:
- TwampFwdDeltaMs = TWAMP Forward Delta (i.e. latency from QoSAgent -> Reflector) in milliseconds
- TwampRevDeltaMs = TWAMP Reverse Delta (i.e. latency from Reflector -> QoSAgent) in milliseconds
- TwampProcDeltaMs = TWAMP Processing Delta (i.e. latency occurring within the Reflector) in milliseconds
Used with the /api/v1/twamp endpoint.
You can add the agent_id and customer_id parameters:
curl -s "http://<analyzer_IP>:12099/api/v1/twamp?agent_id=<agentID>&customer_id=<customerID>"
Or from your browser at:
http://<analyzer_IP>:12099/api/v1/twamp?agent_id=<agentID>&customer_id=<customerID>
sample output:
{
"agentID": "12",
"time": "2024-06-21T18:56:58.512Z",
"TwampFwdDeltaMs": 0.32,
"TwampRevDeltaMs": 0.94,
"TwampProcDeltaMs": 0.18,
}
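The sketch below reads the three TWAMP deltas and, purely as an illustration, sums them to approximate the round-trip latency; that sum is not a field returned by the API. Host and IDs are placeholders.
import json
import urllib.parse
import urllib.request

query = urllib.parse.urlencode({"agent_id": "<agentID>", "customer_id": "<customerID>"})
url = f"http://<analyzer_IP>:12099/api/v1/twamp?{query}"

with urllib.request.urlopen(url, timeout=10) as resp:
    twamp = json.load(resp)

fwd = twamp["TwampFwdDeltaMs"]    # QoSAgent -> Reflector
rev = twamp["TwampRevDeltaMs"]    # Reflector -> QoSAgent
proc = twamp["TwampProcDeltaMs"]  # time spent within the Reflector
# The sum below is an illustration only; it is not a field returned by the API.
print(f"Forward {fwd} ms, return {rev} ms, processing {proc} ms "
      f"(approx. round trip {fwd + rev + proc:.2f} ms)")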
F) Forecast Insight API
This API provides the latest forecasted measures using a statistics-based mean and projection.
Used with the /api/v1/forecast endpoint.
You can add the agent_id and customer_id parameters:
curl -s "http://<analyzer_IP>:12099/api/v1/forecast?agent_id=<agentID>&customer_id=<customerID>"
Or from your browser at:
http://<analyzer_IP>:12099/api/v1/forecast?agent_id=<agentID>&customer_id=<customerID>
sample output:
{
"agentID": "1",
"time": "2024-06-21T18:56:58.512Z",
"projectedLatencyMs": 1.5,
"forecastingIntervalMs": 3.28,
"confidenceLevel": 0
}
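The sketch below reads the latest forecast entry and prints it together with its confidence level; host and IDs are placeholders and the field names follow the sample output above.
import json
import urllib.parse
import urllib.request

query = urllib.parse.urlencode({"agent_id": "<agentID>", "customer_id": "<customerID>"})
url = f"http://<analyzer_IP>:12099/api/v1/forecast?{query}"

with urllib.request.urlopen(url, timeout=10) as resp:
    forecast = json.load(resp)

print(f"Projected latency {forecast['projectedLatencyMs']} ms, "
      f"forecasting interval {forecast['forecastingIntervalMs']} ms, "
      f"confidence level {forecast['confidenceLevel']}")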
API Definition
An API definition file that complies with the OpenAPI standard is available in YAML format.
You can download the file here:
wget https://api.latence.ca/software/latencetech_api_definition.yml
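Once downloaded, the definition can be inspected with standard OpenAPI tooling. As an example, the sketch below lists the endpoints declared in the file; it assumes PyYAML is installed (pip install pyyaml) and that the definition follows the usual OpenAPI paths structure.
import yaml  # requires PyYAML: pip install pyyaml

with open("latencetech_api_definition.yml") as f:
    spec = yaml.safe_load(f)

# Print each declared path with its HTTP operations.
for path, operations in spec.get("paths", {}).items():
    print(path, "->", ", ".join(op.upper() for op in operations))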