At their network resilience hackathon, arranged by the ODI in Leeds on 10 July 2018, Yorkshire Water released a closed set of operational data relating to the performance of their sewer networks. This included data from some of their Combined Sewer Overflows (CSOs), along with historic data on sewer flooding around the region and a range of other related data.

This closed set of data was made available at the beginning of the day and deleted from PCs and servers at the close of play – a really intense hackathon in which to deliver value-added data.

Six companies battled it out to develop and deliver their solutions for presentation to the CEO of Yorkshire Water, Mr Richard Flint, and to the Chairman of Ofwat, Mr Johnson Cox.

Additional information:

How we developed the solution using MAP


This was an intensive day in which six teams from a range of companies looked to add value to the core datasets provided by Yorkshire Water, helping them answer the question of where best to place 8,000+ sensors in their network.

Most teams opted to display the risk to assets by layering the datasets onto GIS platforms to identify the potential hotspots around the region.

Our solution differed in that we wanted to make use of the operational data, so we fused it with 5-minute historic rainfall data to identify CSOs that operated outside of expected wet weather events. This was all built using MAP IOT data analytics.
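To illustrate the fusion step, here is a minimal sketch of flagging spills that occur outside wet weather. The data layout, 12-hour lookback window and 0.5 mm threshold are illustrative assumptions, not MAP's actual model:

```python
from datetime import datetime, timedelta

def flag_dry_weather_spills(spills, rainfall, lookback=timedelta(hours=12),
                            threshold_mm=0.5):
    """Flag CSO spill events with no significant rain in the preceding window.

    spills   -- list of datetimes when a CSO started to operate
    rainfall -- list of (timestamp, depth_mm) tuples at 5-minute resolution
    Returns a list of (spill_time, is_dry_weather) pairs.
    """
    results = []
    for spill in spills:
        # Total rainfall recorded in the lookback window before the spill.
        total = sum(mm for t, mm in rainfall if spill - lookback <= t <= spill)
        results.append((spill, total < threshold_mm))
    return results

rain = [(datetime(2018, 7, 10, 9, 0), 2.0), (datetime(2018, 7, 10, 9, 5), 1.5)]
spills = [datetime(2018, 7, 10, 10, 0),   # shortly after heavy rain
          datetime(2018, 7, 11, 3, 0)]    # no rain for many hours before
print(flag_dry_weather_spills(spills, rain))
```

A spill flagged as dry-weather operation is worth investigating, since a CSO should only operate in response to rainfall.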


Our team comprised Mike Everest, working in Leeds, supported by Ian Hannah (CTO) and Ibok Kegbokokim, working remotely from the Huntingdon office. The solution was built using the IOT analytics capability of the Meniscus Analytics Platform (MAP), allowing us to rapidly develop a complex big data solution to this problem. By the end of the day we had a working prototype displaying two sets of data, with the following functionality:

  • A working dashboard showing some 450 of the several thousand properties that flooded in 2017/18
  • For each property we displayed the rainfall at that location and compared the historic flooding against the rainfall return period on that day – this gave us an understanding of the magnitude of the rain event that may have led to the flooding
  • A working dashboard displaying CSO operational data for the 6 locations provided to us. For each CSO we displayed the associated rainfall and ran an off-site analysis, using the MAP API and R, to determine a ‘Window of Operation’ for each CSO – the start and end times between which we expect the CSO to operate after a rain event.
  • Using these sets of data allowed us to prioritise the properties at risk of flooding based on how they reacted to actual rain events. For the CSOs, the same technique identifies those with the shortest response to a rain event, i.e. those that respond very quickly to rainfall.
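The prioritisation step for CSOs boils down to a simple ranking on response time. The minutes-based structure below is invented for illustration, not the form the data takes in MAP:

```python
def rank_by_response(response_minutes):
    """Order CSOs fastest-responding first.

    response_minutes -- dict mapping a CSO name to the time (in minutes)
    between the start of a rain event and the start of its operation.
    """
    return sorted(response_minutes, key=response_minutes.get)

# A CSO that spills within minutes of rainfall reacts to almost any event,
# so it sorts to the top of the priority list.
print(rank_by_response({"CSO_A": 45, "CSO_B": 10, "CSO_C": 30}))
# -> ['CSO_B', 'CSO_C', 'CSO_A']
```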

    In order to deliver this we completed the following tasks in MAP:

  • Created a template for the CSO. This included the Item for the raw data as well as Items for the Window of Operation start and end times
  • Created a template for the properties that included a raw Item for flooding events and allowed the addition of a number of such events
  • Imported both the CSOs and the properties into MAP using a standard CSV file importer that created all the individual CSOs and properties from their respective templates
  • Created a script in R that called the relevant data from MAP using the MAP API and identified the Window of Operation times
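The actual analysis ran in R against the MAP API, but the core of the Window of Operation calculation can be sketched as follows. The event structure is an assumption; here the window is taken as the median lag between rain onset and spill start/end across historical events:

```python
from datetime import datetime, timedelta
from statistics import median

def window_of_operation(events):
    """Estimate a CSO's Window of Operation relative to rain-event onset.

    events -- list of (rain_start, spill_start, spill_end) datetimes taken
    from historical wet-weather spills for one CSO.
    Returns (start_offset, end_offset): median lags after the rain begins.
    """
    start_lags = [spill - rain for rain, spill, _ in events]
    end_lags = [end - rain for rain, _, end in events]
    return median(start_lags), median(end_lags)

base = datetime(2018, 7, 10, 9, 0)
events = [
    (base, base + timedelta(minutes=10), base + timedelta(minutes=90)),
    (base, base + timedelta(minutes=20), base + timedelta(minutes=120)),
    (base, base + timedelta(minutes=30), base + timedelta(minutes=100)),
]
start, end = window_of_operation(events)
print(start, end)  # median lags after rain onset
```

A CSO operating well outside this window – or with no rain at all – then stands out as a candidate for a sensor.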
MAP Templates

    MAP templates allow users to create their own calculations and variables in MAP. Once you have set up your initial template and checked the calculations, you can replicate the template across thousands – even hundreds of thousands – of Things or Entities. Everything is created through the MAP browser-based web client using MAP IOT data analytics.

    To replicate the template, you create a CSV file containing all the variables, the names of your Entities and any properties that you want. Include the full path name of the template you wish to use, then import the CSV using the MAP web client.
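A minimal sketch of building such an import file with Python's csv module – the column names and template path below are invented for illustration; the real importer's expected headers may differ:

```python
import csv
import io

# Hypothetical layout: one row per Entity, referencing the template to clone.
entities = [
    {"entity_name": "CSO_Leeds_01", "template_path": "/Templates/CSO",
     "catchment": "Aire"},
    {"entity_name": "CSO_Leeds_02", "template_path": "/Templates/CSO",
     "catchment": "Wharfe"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf,
                        fieldnames=["entity_name", "template_path", "catchment"])
writer.writeheader()
writer.writerows(entities)

csv_text = buf.getvalue()
print(csv_text)
```

Each row then becomes one Entity in MAP, inheriting every Item and calculation defined on the template.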

    MAP imports the Entities and applies them to the template, which creates all the data structures and associated calculations. Once the Entities are created, it's just a matter of adding the raw data – historic or real time – and MAP starts calculating everything in the background.