In this featured video I show two versions of my UnitySA beta (Beta 30) running side by side. On the left we are visualizing around 100,000 Snort IDS events clustered into nearly 7,000 nodes and 7,000 edges (14,000 objects); on the right, the visualization shows over 400 near-real-time TCP connections on the same server. In each visualization I run some very simple gaming-style AI that seeks out nodes-of-interest and travels to each of them. In the first segment of the video the orbit traverses only “priority one” nodes. In the second half I toggle a key or two to traverse “priority two” nodes in each graph, using the same orbits in both windows. Please note that I toggled off all the user-information text (controls and node attributes) and show only the graph in this video.
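The post does not show the actual Unity code, but the “seek out nodes-of-interest” behavior can be sketched as a greedy nearest-first tour over nodes filtered by priority. This is a minimal illustrative sketch, not the real implementation; the `Node` class, its fields, and the `tour` function are all hypothetical names.

```python
# Hypothetical sketch of a priority-filtered "seek" behavior like the
# orbit described above; class and field names are illustrative only.
import math

class Node:
    def __init__(self, name, x, y, z, priority):
        self.name = name
        self.pos = (x, y, z)
        self.priority = priority

def nodes_of_interest(nodes, priority):
    """Filter the graph down to the nodes the orbit should visit."""
    return [n for n in nodes if n.priority == priority]

def nearest_unvisited(current_pos, candidates, visited):
    """Greedy 'seek' step: head for the closest node not yet visited."""
    remaining = [n for n in candidates if n.name not in visited]
    if not remaining:
        return None
    return min(remaining, key=lambda n: math.dist(current_pos, n.pos))

def tour(nodes, priority, start=(0.0, 0.0, 0.0)):
    """Visit every node of the chosen priority, nearest-first."""
    visited, pos, order = set(), start, []
    while (nxt := nearest_unvisited(pos, nodes_of_interest(nodes, priority), visited)):
        visited.add(nxt.name)
        order.append(nxt.name)
        pos = nxt.pos
    return order
```

Toggling between “priority one” and “priority two” then amounts to calling `tour` with a different `priority` argument while reusing the same camera orbit.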
I have been making more progress early this year than I originally thought. The visualizations are coming along nicely and my coding is getting more efficient, so even though I am running this on a three-year-old MacBook Air with only 4 GB of RAM, it runs “OK”. The temperature of the hot spot on the MBA has increased to around 46 degrees C, so it is running hotter, but not too hot yet.
On the back end, creating the force-directed graphs (FDG) of 100,000 IDS events takes around 10 minutes on an 8-core Linux server with 32 GB of RAM and minimal traffic, and about twice as long if I do the FDG processing on a more active web server with 64 GB of RAM and 16 cores (better hardware, but less CPU available). The XML file for the FDG, with all the node and link attributes, has grown to around 2.4 MB. So the scalability issues are in the graph processing and in the XML file transfer (and loading into the visualizing app) across the network.
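The post does not say which FDG algorithm the server runs, but a classic Fruchterman-Reingold-style iteration shows where the cost comes from: the pairwise repulsion loop is O(n²) per iteration, which is what starts to hurt at ~7,000 nodes. This is an illustrative 2D sketch under that assumption, not the actual server code.

```python
# Minimal Fruchterman-Reingold-style iteration (illustrative; the real
# FDG code on the server is not shown in the post). The nested loop over
# node pairs is the O(n^2) term that dominates as the graph grows.
import math

def fdg_step(pos, edges, k=1.0, step=0.05):
    """One layout iteration: pairwise repulsion + edge attraction (2D)."""
    disp = {v: [0.0, 0.0] for v in pos}
    nodes = list(pos)
    for i, u in enumerate(nodes):              # O(n^2) repulsion
        for v in nodes[i + 1:]:
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k * k / d                      # repulsive force
            disp[u][0] += f * dx / d; disp[u][1] += f * dy / d
            disp[v][0] -= f * dx / d; disp[v][1] -= f * dy / d
    for u, v in edges:                         # O(e) attraction along edges
        dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
        d = math.hypot(dx, dy) or 1e-9
        f = d * d / k                          # attractive force
        disp[u][0] -= f * dx / d; disp[u][1] -= f * dy / d
        disp[v][0] += f * dx / d; disp[v][1] += f * dy / d
    for v in pos:                              # apply capped displacement
        dx, dy = disp[v]
        d = math.hypot(dx, dy) or 1e-9
        pos[v] = (pos[v][0] + step * dx / d, pos[v][1] + step * dy / d)
    return pos
```

That quadratic inner loop is also why distributed frameworks (or Barnes-Hut-style approximations) become attractive as node counts climb.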
I plan to rewrite the code later this year to send only graph updates across the network (rather than the entire XML file each time I reload), which will help with the file-transfer limitations. It also prepares my code for real-time updates later this year. On the graph-processing side, it looks like we will need to move to a distributed processing environment (like Apache Spark or Hadoop) later this year as well.
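The update-only idea boils down to diffing two graph snapshots and shipping just the added, removed, and changed objects. A minimal sketch of that delta shape, assuming nodes are keyed by ID with attribute dictionaries (the post’s real XML schema is not shown, so these names and the delta format are placeholders):

```python
# Hypothetical delta format for graph updates; the real XML schema is
# not shown in the post, so the {node_id: attrs} shape is a placeholder.
def graph_delta(old, new):
    """Diff two {node_id: attrs} snapshots into a small update record."""
    added   = {k: new[k] for k in new.keys() - old.keys()}
    removed = sorted(old.keys() - new.keys())
    changed = {k: new[k] for k in new.keys() & old.keys() if new[k] != old[k]}
    return {"add": added, "remove": removed, "change": changed}

def apply_delta(graph, delta):
    """Apply a delta in the visualizer instead of reloading everything."""
    graph = dict(graph)
    for k in delta["remove"]:
        graph.pop(k, None)
    graph.update(delta["add"])
    graph.update(delta["change"])
    return graph
```

When only a handful of nodes change between reloads, a delta like this is a tiny fraction of the 2.4 MB full file, and the same mechanism extends naturally to real-time streaming updates.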
Overall, the proof-of-concept is going better than I originally anticipated. I’m comfortable with the technology and the coding. I’m preparing to move to more advanced graph processing soon, but right now I’m having too much fun with orbits and gaming AI.
Because this is going so well, I may stick with 3D game programming for a while and hold off on moving to VR. There are a lot of interesting tasks to do in 3D without VR, and I’m not “jumping up and down” to put a heavy VR headset on. I think it’s easier to code without a big clunky VR headset, LOL. Maybe we will get lucky and someone will improve VR technology so it is much less “headset intensive” sooner rather than later. I hope so, anyway.