MAP Rain case study – Leeds City Council and storm Ciara
Case Study overview
Leeds City Council use MAP Rain to analyse historic rain events and to receive forecast rain alerts. This is a summary of the alerts generated for storm Ciara on the 9th and 10th of February 2020, comparing actual rainfall against the alerts generated.
MAP Rain supports different types of rainfall alert. The alerts for Leeds include:
MAP Rain recalculates these alerts every 5 minutes as new radar and forecast rainfall data are received. E-mails are sent to the Flood Management team according to a set of rules designed to limit the number of e-mails sent.
Over the last two years, the Leeds City Council Flood Management team have been very pro-active in reviewing and adjusting these threshold values in light of historic flooding events. They have also adjusted the location of the monitoring points to correlate them more precisely to locations at risk.
Quote from John Bleakley, Group Engineer (Investigations), 18th February 2020.
Actual rainfall depth – issued by Leeds City Council flood team on the 10th February
The choropleth map shown below was created by the Leeds City Council flood team using data from MAP Rain. The image was tweeted to residents the day after the storm and is a great example of using maps to communicate the location and depth of the rainfall that affected the community.
MAP Rain FEH rain event Alerts
For storm Ciara, MAP Rain generated alerts for 5 catchments and 18 monitoring points. These alerts were e-mailed to Leeds City Council and the latest alert status was displayed on the MAP Rain dashboard.
Alerts for the monitoring points were generated earlier than the catchment alerts and also changed more frequently. This is a consequence of the inherent averaging associated with the catchment alerts. The catchments aggregate the rainfall for all the underlying rainfall cells, and this averaging, or smoothing, means they react more slowly to changes in the forecast rainfall.
The two images below display the results for the first FEH Alerts generated for the storm. The left-hand image shows the first catchment alerts (shown in yellow). The right-hand image shows the first point-based alerts with their corresponding colour coded legend. The numbers are the calculated FEH rain event predicted for the catchment or point.
The first monitoring point alerts were generated at approximately 09:45 on the 8th February with the majority generated from about 20:00 on the 8th, some 4-5 hours before the start of the heavy rain.
The first catchment-based alerts were created later at about midday on the 9th February.
Home Energy platform upgrade – new Widget graphs
Home Energy platform upgrade lets users create their own dashboards
We have upgraded the Meniscus Calculation Engine (MCE) on our free-to-use Home Energy monitoring platform. Over the next couple of months we will phase out the existing Silverlight dashboard, as Silverlight only runs on Internet Explorer.
Example of an MCE real-time widget
This graph will update every 2 minutes.
New Widget graphs and getting your API key
The widgets let you easily create your own dashboards that you can configure yourself.
To use the widgets you will need your API key, which is the first 16 characters of the e-mail address that you used to create your account with us. Please note a couple of things.
If you would like to change your API key, please send an e-mail to [email protected] quoting the e-mail address you use for your Home Energy account.
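As a quick illustration of the rule above, the key is simply a 16-character prefix of your account e-mail address. The function name and the example address below are made up for illustration:

```python
def api_key_from_email(email: str) -> str:
    """Derive the Home Energy API key: the first 16 characters
    of the e-mail address used to create the account."""
    return email[:16]

# A made-up example address:
print(api_key_from_email("jane.smith@example.com"))  # -> jane.smith@examp
```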
To create the widgets
When you are ready, select the Output HTML. This will generate the iframe code that can be pasted into any HTML web page to display the graph.
Create the Dashboard
Follow these steps to create an HTML dashboard with the widgets you have created. This will give you much more visibility of the Home Energy platform.
For more information on the MCE widgets, click here.
See an example of the MCE widget dashboard
MAP IoT Entities – Introduction
MAP IoT Entities give you control over how to turn raw data from a sensor, device or anything else into the analytics you want. Entities include Raw Items and Calc Items, both contained in an Entity Template. This Entity Template is instantiated as often as you want by importing configuration files. These config files include the names and properties of the Entity and the properties of the Raw and Calc Items. On import, the Entity Template creates the Entity and its Raw and Calc Items and immediately starts processing any raw data that is available.
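As a sketch, one of those configuration files might carry the following information. The field names here are hypothetical, expressed as a Python dict rather than MAP's actual import schema:

```python
# A hypothetical Entity configuration (the real MAP import format
# and field names may differ).
entity_config = {
    "template": "RainGauge",             # Entity Template to instantiate
    "entity": {"name": "gauge_0042", "location": [53.80, -1.55]},
    "raw_items": [
        {"name": "rainfall", "data_type": "FloatSample", "units": "mm"},
    ],
    "calc_items": [
        {"name": "hourly_total", "inputs": ["rainfall"], "calc": "sum_1h"},
    ],
}

# On import, MAP would create the Entity plus its Raw and Calc Items
# and start processing any raw data already available.
print(entity_config["entity"]["name"])
```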
This is the first of several articles that we will write on how to use MAP IoT Entities to deliver your IoT application.
What are MAP IoT Entities?
Raw Items
A Raw Item contains the raw data that you upload into MAP. Raw data can be of any Data Type you want. If we don’t support the Data Type already (we support quite a range), you can create your own Data Type – see our article on Data Types.
Calculated Items
A Calc Item contains the metrics that you want to create from your raw data. Rather than creating all your analytics in one complex algorithm, our experience is that it is easier, more flexible and quicker to create a number of separate Calc Items that each do a specific part of the analytics.
A core module of MAP is the Invalidator. This continually monitors the calculation time of all Items in MAP and dynamically builds a dependency tree of all Items. By defining the type of invalidation relevant to your Calc Item, you control when and how frequently your Items are updated and re-calculated. The default mode is to invalidate on change of latest calculation time. So, if the latest calculation time of a Raw or Calc Item in the Dependency Tree changes then other Calc Items that are dependent on that Item will automatically recalculate.
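The default invalidation mode can be sketched in a few lines, using hypothetical Item and Invalidator classes (not MAP's real API): when an input's latest calculation time changes, every dependent Calc Item is queued for recalculation, cascading down the dependency tree.

```python
# A minimal sketch of calc-time invalidation; the class and field
# names are illustrative, not MAP's actual internals.
from collections import deque

class Item:
    def __init__(self, name, depends_on=()):
        self.name = name
        self.depends_on = list(depends_on)
        self.calc_time = 0   # latest calculation (or data-change) time

class Invalidator:
    def __init__(self, items):
        # Build the dependency tree: input name -> dependent Items
        self.dependents = {}
        for item in items:
            for dep in item.depends_on:
                self.dependents.setdefault(dep.name, []).append(item)
        self.queue = deque()   # Items waiting to be recalculated

    def on_calc_time_changed(self, changed):
        """Default mode: a change in an input's latest calculation
        time dirties every Calc Item that depends on it."""
        for item in self.dependents.get(changed.name, []):
            self.queue.append(item)
            self.on_calc_time_changed(item)   # cascade down the tree

raw = Item("raw_rainfall")
hourly = Item("hourly_totals", depends_on=[raw])
daily = Item("daily_totals", depends_on=[hourly])

inv = Invalidator([raw, hourly, daily])
raw.calc_time = 1                 # new raw data arrives
inv.on_calc_time_changed(raw)
print([i.name for i in inv.queue])   # -> ['hourly_totals', 'daily_totals']
```

In the real system the cascade would happen as each Item finishes recalculating; the sketch collapses that into one pass for brevity.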
What this means in practice is:
Why use MAP IoT Entities – what are the benefits?
The key reason is simplicity. They are easy to use, easy to set up and offer a lot of flexibility.
MAP is an integrated stack so we do all the complicated plumbing required to deliver the calculated metrics you want. So, all a developer needs to consider is:
That’s it – MAP takes care of everything else!
What is MAP?
MAP stands for the Meniscus Analytics Platform and is our IoT Analytics Platform for delivering solutions at scale and at speed. It is an Integrated Analytics Stack, so you can develop your solutions quicker and more easily.
More information on MAP IoT
More information on MAP
New MAP IoT Gateway device
Our new IoT Gateway device makes it easier for developers to connect to MAP directly from devices.
The gateway runs as a Windows Service on the IoT device, on a Raspberry Pi or on a micro PC. It uses a MAP importer to push and pull data between the device and MAP. So, the gateway allows a bi-directional flow of data, making it possible to send instructions from MAP back to the IoT device.
Within MAP, making use of our IoT Entity model, developers can create templates containing Items and Properties and add any calculation they want.
For more information on MAP, click here.
MAP Rain – new forecast alert dashboard
Our new MAP Rain alert dashboard now makes it much easier to keep track of forecast rain alerts for Points and Polygons in your area of interest. The alert dashboard updates every 5 minutes, as new rainfall data arrives, and updates the colour of your Points and Polygons depending on the level of flood risk. The alert dashboard also displays an animation of the forecast rainfall across the UK.
The new forecast alert dashboard provides a simple way to see the alert status of your Points and Polygons, along with an animation of the forecast rainfall for the next 36 hours. It is designed for monitoring flooding risk 24/7.
For a live demonstration of MAP Rain, click here and set both the Username and Password to demo.
Toggling between current (Query) view and new Alert view
We have added a new button which switches between the existing view of the dashboard and the new Alert view.
New Alert view
The purpose of this new Alert view is to provide customers with a simplified 24/7 view of the rainfall and associated alerts in their area of interest.
You can switch back to the original dashboard view using the “Switch View” button.
Lazy loading for processing large data sets
Introduction
This is part of a series of articles where we describe the way the Meniscus Analytics Platform (MAP) works. These articles dive into the features that make MAP different from other analytics applications by providing an Integrated Analytics Stack delivering real time analytics.
This article investigates the benefits of lazy loading of data and why this is important in MAP.
What is lazy loading of data?
Quite simply, it means loading only the part of the data that is required to deliver the information requested. In MAP, this principle is used to limit the data input and output between the underlying MongoDB database and MAP. Whilst this may sound like a simple and obvious principle to apply, it isn’t always used. Many developers will know the principle from developing dashboards and user interfaces, but it is even more important when considering back end database operations.
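The principle can be sketched in a few lines, with a hypothetical load_samples() method standing in for the real database read:

```python
# A minimal sketch of lazy loading: the database is only hit on
# first access, and the result is cached for subsequent accesses.
class Item:
    def __init__(self, item_id):
        self.item_id = item_id
        self._samples = None          # nothing fetched yet

    def load_samples(self):
        # Stand-in for the real database read
        print(f"loading samples for {self.item_id}")
        return [1.2, 3.4, 5.6]

    @property
    def samples(self):
        # Load on demand only, then keep the result in memory
        if self._samples is None:
            self._samples = self.load_samples()
        return self._samples

item = Item("rain_gauge_7")           # no database access here
total = sum(item.samples)             # first access triggers the load
total = sum(item.samples)             # cached; no second load
```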
Why is lazy loading relevant in MAP?
MAP ingests and processes very large volumes of near real time data, specifically data associated with weather. More importantly, MAP holds historic data so that we can deliver historic analytics as used in our MAP Rain solution.
This means data IO is a key factor in delivering the lightning-fast calculation speeds that MAP delivers, so anything that can improve these IO times is of huge importance to MAP. Lazy loading reduces the volume of data extracted from, and written back to, the database, and so improves data IO times.
About MAP
MAP is an Integrated Analytics Stack providing a framework for users to create and deploy calculations at scale using any source of raw data. MAP is based on IoT principles and uses Items as the underlying building blocks to store either Raw or Calculated data. Users create an Entity Template, or Thing, from these Items and then replicate the template hundreds of thousands of times using an ItemFactory.
For more information on MAP, click here.
Support for rich and extensible data types
Introduction
This is part of a series of articles where we describe the way the Meniscus Analytics Platform (MAP) works. These articles dive into the features that make MAP different from other analytics applications by providing an Integrated Analytics Stack delivering real time analytics.
This article discusses how and why having extensible data types is a real benefit when developing your analytics applications.
Why are extensible data types important?
Being able to use a wide variety of ‘standard’ data types, but also to create your own, delivers lots of benefits.
Examples of data types supported by MAP
We have a number of ‘standard’ extensible data types already configured in MAP but there is no limit to the number or variety that you can create.
Examples of data types
Benefits of a dynamically constructed dependency tree
Introduction
This is part of a series of articles where we describe the way the Meniscus Analytics Platform (MAP) works. These articles dive into the features that make MAP different from other analytics applications by providing an Integrated Analytics Stack delivering real time analytics. This article discusses the benefits of a dynamically constructed dependency tree.
What is a dynamic dependency tree?
A dependency tree is a tree describing how each Item links to other Items. We use it to manage and understand which Items are required when calculating another Item. So, if Item 1 requires Item 3 and Item 2004 to calculate, then any change in Item 3 or Item 2004 will place Item 1 on the calculation queue to be recalculated. The process of managing the Items placed on the queue is critical to MAP, and we have a separate Invalidator module specifically to do this.
While our old MCE analytics platform held a dependency tree, it was not dynamic and so not really a scalable solution. MAP uses a dynamic dependency tree: as new Items are added, MAP automatically builds its own tree by learning from the calculations as they run. This in turn means that MAP is scalable and can run on any size of database.
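Building the tree by learning from running calculations can be sketched like this, with a hypothetical Engine class (not MAP's real internals): every input an Item reads while calculating is recorded as a dependency edge.

```python
# A sketch of constructing the dependency tree dynamically by
# observing which Items a calculation reads as it runs.
class Engine:
    def __init__(self):
        self.dependents = {}   # input id -> set of dependent Item ids
        self._reads = None     # inputs read by the calculation in flight

    def read(self, item_id):
        """Called whenever a calculation reads an input Item."""
        if self._reads is not None:
            self._reads.add(item_id)
        return ...             # fetching the actual value is elided

    def run_calc(self, item_id, calc):
        # Record every input the calculation touches while it runs
        self._reads = set()
        calc(self)
        for input_id in self._reads:
            self.dependents.setdefault(input_id, set()).add(item_id)
        self._reads = None

engine = Engine()
# "Item 1" reads Items 3 and 2004, as in the example above:
engine.run_calc(1, lambda e: (e.read(3), e.read(2004)))
print(engine.dependents)
```

Because the edges are learned rather than declared up front, adding new Items never requires rebuilding the tree by hand.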
Benefits of using a dependency tree
Using Data Blocks and Data Versioning to deliver real time analytics
Introduction
This is part of a series of articles where we describe the way the Meniscus Analytics Platform (MAP) works. These articles dive into the features that make MAP different from other analytics applications by providing an Integrated Analytics Stack delivering real time analytics. In this article we discuss Data Blocks and Data Versioning.
In delivering real time analytics, disk IOPS (input/output operations per second) is one of the main rate-limiting steps in achieving the calculation speeds required when processing high volume, high velocity raw data. An example of such data is radar rainfall data, where new values covering a large area arrive every 5 minutes.
To help reduce disk IOPS, we built the concepts of Data Blocks and Data Versioning into MAP to drastically speed up data access, increase calculation speed and reduce the volume of data written back to the database.
Data Blocks
Rather than loading and persisting all data for an Item, data can be broken up into chunks called Blocks. So, only the chunks of data that are demanded for a query, or as an input to a calculation, are loaded from the database (i.e. delay loading), and only the chunks of data that actually change need to be persisted. Blocks are typically used with unbounded, time-related data such as sample arrays, where the size of a Block is limited and the maximum number of Block samples depends on the size of a sample. This provides efficiencies in real-time processing, whereby data changes are localised and typically at the end of the data.
Data Blocks are transparent to the user. They are purely an internal mechanism to reduce traffic to and from the database. When requested or persisted, Data Blocks are held in memory for a time, so that future retrieval is temporarily faster while the data is expected to be in demand.
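A minimal sketch of Block-based delay loading, with a hypothetical BlockStore and fetch_block() standing in for the real MongoDB layer. Because real-time changes are localised at the end of the data, a write typically loads and dirties only the final Block:

```python
# Illustrative only: class names and the Block size are made up.
BLOCK_SIZE = 4   # samples per Block (real limits depend on sample size)

class BlockStore:
    def __init__(self, fetch_block):
        self.fetch_block = fetch_block   # reads one Block from the database
        self.cache = {}                  # Blocks held in memory once touched
        self.dirty = set()               # Blocks changed since the last persist

    def _block(self, n):
        if n not in self.cache:                  # load on demand only
            self.cache[n] = self.fetch_block(n)
        return self.cache[n]

    def get(self, i):
        return self._block(i // BLOCK_SIZE)[i % BLOCK_SIZE]

    def set(self, i, value):
        self._block(i // BLOCK_SIZE)[i % BLOCK_SIZE] = value
        self.dirty.add(i // BLOCK_SIZE)          # only this Block is re-persisted

# A fake database holding 1000 Blocks of samples
db = {n: [0.0] * BLOCK_SIZE for n in range(1000)}
store = BlockStore(lambda n: list(db[n]))

# Writing near the end of the data touches only the final Block:
store.set(3998, 2.5)
print(len(store.cache), store.dirty)   # 1 Block loaded, 1 Block dirty
```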
Data Versioning
Data Blocks are complemented by the MAP concept of Data Versioning. All Item data in MAP is versioned, including Blocks (referred to as child data). A version is simply a unique timestamp. It allows users to query for the relative age of data: specifically, when it last changed and, for calculated Items, when the last calculation started and completed. A client application can then tell if data has changed without having to load the data itself. There are additional non-data versions on an Item, e.g. when its properties or its list of child Items last changed.
It is this versioning technique that allows MAP to efficiently detect when calculated Items need recalculating (referred to as dirtying for calculation).
MAP Sewer – creation of simplified sewer network models
We have been working to speed up the creation of the simplified sewer network models in MAP Sewer so that we can rapidly create new models for new catchments. We have now automated the process of creating the main simplified model, and all the relevant geometries, from the detailed GIS layers that make up the ‘standard’ detailed models used by most water companies.
The objective of this work is:
The methodology includes:
The process takes several hours to run and the outputs are:
Once this is done, we can add some of the pumping attributes to the Pumping Station and Detention Tank geometry files and then load all the files into MAP Sewer from the dashboard. MAP Sewer then creates the geometries in a few minutes, and the whole catchment is calculated in 20 minutes – this includes over two years of historic data, all at 5 minute periodicity. We can now start to validate the model and feed it with real time and forecast rainfall data.