Who wants to hang out in messy data?
Richard Senft, founder of Amber Waves VR, says to visualise "really big" datasets in VR you'll "require a considerable backend that uses machine learning or deep learning to find patterns in data". Those lacking a statistics background will also need some kind of artificial intelligence assistant to select what dimensions to visualise.
The focus of Amber Waves' machine learning is on reducing the number of dimensions in a dataset so it is easier to understand, says Senft, who is building for Oculus Rift and HTC Vive.
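Amber Waves hasn't said which technique it uses, but principal component analysis is one standard way to collapse many columns into the handful of axes a headset can actually display. A minimal sketch with scikit-learn's PCA, on invented random data:

```python
# Reduce a 10-dimensional dataset to 3 dimensions with PCA, so each row
# can be plotted as a point in a 3D (VR) scatter plot. The data here is
# synthetic; a real pipeline would start from a user's dataset.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 10))          # 200 rows, 10 dimensions
reduced = PCA(n_components=3).fit_transform(data)
print(reduced.shape)  # (200, 3): three axes, ready to plot in VR
```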
The $4m-backed Virtualitics, currently in beta testing, also makes much of its machine learning backend. It doesn't use a hyped technique such as a neural network, but rather the dependable Random Forest method, with some of its models drawn from the scikit-learn library. Users select a variable they want to understand – money spent per customer, say – and the system automatically comes up with a set of explanatory variables, chooses a graph type (a scatter plot or histogram, for example) and then visualises the data using the three axes, colours and shapes.
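Virtualitics hasn't published its pipeline, but a Random Forest's built-in feature importances are the obvious scikit-learn route to "coming up with" explanatory variables. An illustrative sketch, with invented column names and synthetic data in which spend is driven mainly by visit count:

```python
# Rank candidate explanatory variables for a target (spend per customer)
# using RandomForestRegressor's feature_importances_. Synthetic data:
# spend depends strongly on visits, weakly on age, not at all on noise.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
visits = rng.integers(1, 20, n)
age = rng.integers(18, 70, n)
noise = rng.normal(0, 1, n)
spend = 10 * visits + 0.1 * age + rng.normal(0, 1, n)

X = np.column_stack([visits, age, noise])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, spend)

ranked = sorted(zip(["visits", "age", "noise"], model.feature_importances_),
                key=lambda p: -p[1])
print(ranked[0][0])  # the strongest explanatory variable: visits
```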
Platforms in development tend to use simple tabular data formats like CSV as the foundation of their data visualisation work, but Virtualitics is going to add support for SQL and include connectors to other databases and data stores – including SAP. This will allow users to get data from different databases and write custom SQL queries, the company says.
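The two ingestion paths described above – flat CSV files today, SQL queries tomorrow – can be sketched with pandas. Table and column names here are invented, and the "database" is an in-memory SQLite store rather than a production system like SAP:

```python
# Two ways tabular data reaches a visualisation backend:
# 1) a CSV file, 2) a custom SQL query against a database connection.
import io
import sqlite3
import pandas as pd

# CSV path: read tabular data straight from a file (in-memory buffer here).
csv_df = pd.read_csv(io.StringIO("customer,spend\nA,120\nB,75\n"))

# SQL path: load the same table into a database, then run a custom query.
conn = sqlite3.connect(":memory:")
csv_df.to_sql("purchases", conn, index=False)
sql_df = pd.read_sql_query(
    "SELECT customer, spend FROM purchases WHERE spend > 100", conn)
print(sql_df["customer"].tolist())  # ['A']
```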
But visualising big sets of messy, unstructured data – like the masses of text, videos and pictures generated on social media, for example – is a challenge for VR, just as it would be in 2D or 3D on a screen, because it still requires a way of structuring and cleaning it up before it can be represented graphically.
3D can be a better medium than 2D for displaying things like connections between social media users, thinks Daden's Burden, as plotting connections between nodes on a flat screen leaves you with a messy "bird's nest" of lines. But to tease the shape out into a more understandable 3D form requires a good layout algorithm, he says.
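Burden doesn't name an algorithm, but force-directed layouts are the usual starting point. A sketch using networkx's spring layout run in three dimensions, on an invented edge list, so connected users cluster in space rather than tangling on a plane:

```python
# Compute a 3D force-directed layout for a small social graph.
# spring_layout with dim=3 returns an (x, y, z) position per node.
import networkx as nx

G = nx.Graph([("alice", "bob"), ("bob", "carol"), ("carol", "alice"),
              ("dave", "erin")])
pos = nx.spring_layout(G, dim=3, seed=42)  # node -> 3D coordinates
print(len(pos["alice"]))  # 3 coordinates per node
```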
VR or AR, in other words, is only the viewing medium – on its own, it may not help us sort unstructured data into a meaningful shape. Michael Peters, a Hololens developer for In-Vizible experimenting with visualisations in AR, agrees. "I don't think AR is really going to solve messy data," he says.
Which way are folk leaning?
But let's assume you've bought into AR and VR as a concept, and want – or have been told – to build something. Where exactly do people stand?
The first issue developers are grappling with is whether data is best viewed in VR or in AR – broadly, it's HoloLens for AR and everybody else for VR. "With VR you're going to be stuck with avatars," says Peters. The Virtualitics platform allows two people, represented by translucent ghostly heads, to analyse the same data simultaneously. "But with AR, I can interact with real people."
VR is seen as more suited for walking through data at room-scale, whereas with Microsoft's HoloLens users tend to view a visualisation on a table-top that they walk around – as with LOOOK's KPMG platform. In Peters' HoloLens visualisations, the user stays still and the data points move around them, like planets round a star.
One common complaint is that the HoloLens has too narrow a field of view; Microsoft has described it as like looking at a 15-inch screen from two feet away. VR, on the other hand, envelops far more of your vision, making it much easier to believe you are surrounded by data points.
This is the reason why Virtualitics has chosen to use VR headsets, co-founder Ciro Donalek told me. "We hope they [Microsoft] will improve the hardware soon and we will be ready to support it."
VR has its own issues, of which reading text is one: headsets' relatively low resolution makes it an uncomfortable experience, and makes it tricky to switch to viewing the underlying data. Text in the HoloLens, by contrast, is "super crisp and very, very clear," says Peters.
Assuming you've got past the VR-versus-AR question, what next?