In late November Massive Analytic Limited (MAL) took part in a Hackathon jointly organised by DASA, Oracle and the event team from bemyapp.com. As a little background for those not familiar with it DASA (Defence and Security Accelerator) is part of the UK Ministry of Defence, with a focus on defence innovation and partnering with organisations from academia, industry and government in the defence and security arena.
The DASA Defence Logistics Hackathon was an opportunity to meet companies in related fields and others offering unique services for the military. It was also an interesting coding challenge for the team! Organised under the umbrella of the USA, UK and French military alliance and the shared problem of access to Hercules C-130 data, the challenge was to demonstrate the ability to analyse and share structured and unstructured multi-source data while maintaining its classification and permission-based access rules. There was also a strong focus on AI/machine learning techniques to provide long-term insights on the data once the access issues were resolved. As far as challenges go, this one seemed right up our alley!
The hackathon started with some introductions hosted by the enthusiastic bemyapp team. As well as introducing the problem, these introductions served to aid team formation from the assorted attendees. MAL formed part of a team including individuals from Troika Solutions, DIEM Analytics, UCL and PA Consulting. We gelled very well as a team and self-organised quickly, with everyone being given a role to play.
From the outset the time pressure was intense: 3 hours in and we still hadn’t got hold of all the data supplied via Oracle’s RDBMS. There was a contention issue on the network with the other teams, which was resolved once we were able to import all the data into our Oscar:DataScience (Oscar) platform via the JDBC interface. While this was happening, the other team members had already started building the access and group modelling.
None of the non-MAL team had used Oscar before, so this was a good test of the intuitiveness of the user interface. To Oscar’s credit (and the team’s as well), after a quick introduction the Troika team was busy creating user groupings for the access management, whilst the data scientists from DIEM Analytics and PA Consulting were busying themselves with Oscar transforms and the scripting interface.
Once we had all the data imported into Oscar, the next step was to put together a white paper for our strategy. This centred on the role of the C-130 Fleet Manager and comprised the following points:
A) Bringing the data together into one cohesive platform enabled our chosen use cases to be addressed and leveraged by the Oscar Workflow tool. This allowed the use cases to be tackled in the following way:
Maximising the utilisation of the assets by understanding the time to schedule repairs, to minimise the amount of downtime
Reducing the logistical burden by optimizing the supply chain, based on the base location (i.e. cooler climates may require fewer cooling systems)
Increasing the probability of mission success by tailoring the aircraft selected for each mission to the mission objective
B) We were able to enforce user permissions and data restrictions with a single value per row
C) This was the best option because it provides the foundation for the follow-on iterations that deliver the real operational value, while also supporting scalability and tailoring based on operational needs
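The single-value approach to permissions can be sketched in a few lines. This is a minimal, hypothetical illustration of the idea (not Oscar’s actual API): each row carries one access tag, and a user sees a row only if that tag is among the groups they belong to.

```python
# Hypothetical sketch: row-level access control via a single tag per row.
# Field names ("access", "tail_no", "note") are invented for illustration.

def visible_rows(rows, user_groups):
    """Return only the rows whose access tag is in the user's groups."""
    return [r for r in rows if r["access"] in user_groups]

data = [
    {"tail_no": "C130-001", "note": "hydraulic leak", "access": "SECRET"},
    {"tail_no": "C130-002", "note": "routine check", "access": "OFFICIAL"},
]

# A user cleared only for OFFICIAL material sees just the second row.
print(visible_rows(data, {"OFFICIAL"}))
```

Because the rule reduces to one comparison per row, it is cheap to enforce at query time and easy to audit.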
…And that was the end of day one.
At the start of day two we had two Oscar enhancements under way. The first was an LDA topic model to cluster free-text comments from the technicians’ notes in the dataset - an NLP process that not only simplified the data but made it easily searchable by topic. The second enhancement was an addition to the data security model: adding a meta-column to a dataset so we could define row-level security access. We proved this was working by testing as different users in an Oscar dashboard with spreadsheet and KPI charts. We also employed Oscar user groups, including a new Exclusive Groups feature, which allows for dataset access based on membership of more than one group - perfect for the military use case of differing security clearances combined with job roles.
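To give a flavour of the LDA step, here is a minimal sketch using scikit-learn (Oscar’s own implementation is not shown here, and the sample notes are invented): maintenance comments are vectorised into word counts, LDA assigns each note a mixture of topics, and tagging each note with its dominant topic makes the set searchable by theme.

```python
# Minimal LDA topic-modelling sketch over free-text maintenance notes,
# using scikit-learn as a stand-in; the notes below are invented examples.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

notes = [
    "hydraulic pump leaking, replaced seal",
    "engine oil pressure low on number three engine",
    "replaced hydraulic line, pressure test passed",
    "oil filter changed, engine run-up normal",
]

# Turn each note into a bag-of-words count vector.
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(notes)

# Fit a 2-topic LDA model; each note gets a weight per topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_weights = lda.fit_transform(counts)

# Tag each note with its dominant topic so notes can be filtered by theme.
dominant = topic_weights.argmax(axis=1)
```

In practice the number of topics and the vocabulary filtering would be tuned against the real notes; the point is that unstructured comments become a structured, searchable column.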
The team were also hard at work building some additional data science insights. The first of these was built on Oscar’s data cleansing features and populated missing tail numbers for the dataset rows that didn’t identify the relevant C-130 aircraft. An ultimate goal for the data would be to allow for predictive maintenance; to that end we created a decision tree to determine the biggest predictors of remaining flight time, which highlighted the repair severity of certain components as the biggest predictor.
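The decision-tree analysis can be sketched as follows. This is an illustrative reconstruction, not our actual pipeline: the feature names and data are synthetic, with repair severity deliberately made the dominant driver so that the tree’s feature importances surface it, as they did for us on the real data.

```python
# Sketch: rank predictors of remaining flight time with a decision tree.
# Data and column names are synthetic, for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 200
repair_severity = rng.integers(1, 5, n)    # 1 = minor ... 4 = major
flight_hours = rng.uniform(0, 5000, n)     # hours flown since overhaul
base_temp = rng.uniform(-10, 40, n)        # base climate, degrees C

# Synthetic target: severity dominates remaining flight time.
remaining = 1000 - 200 * repair_severity + rng.normal(0, 50, n)

X = np.column_stack([repair_severity, flight_hours, base_temp])
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, remaining)

# Feature importances reveal the strongest predictor.
for name, imp in zip(["repair_severity", "flight_hours", "base_temp"],
                     tree.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

A shallow tree like this doubles as an explanation: each split is a human-readable threshold a fleet manager can sanity-check.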
With time rapidly running out, we put together our insights. The user groupings had already been demoed the day before as part of the ‘accuracy test’, but we now needed to script the text for the demo and pull everything together for the finale with all the other teams. The demo was quite an affair, with 11 teams showing their solutions, complete with talks and PowerPoint slides. We took our place as the third team to demo and did quite well, if I do say so myself.
Unfortunately we didn’t come out as the winners, and I don’t know what our score was, but it was a fun couple of days where we all learnt something and met new people, while both adding capability and showcasing what our Oscar platform could do!