The video screen capture in this post shows a 3D view of 14,693 Snort IDS alerts clustered into 1,726 nodes and 1,726 edges, rendered as a force-directed graph. The clip would look a lot better with a state-of-the-art graphics card, so thank you for your patience until I buy one. Also, the limits of my full-screen capture app cut off the top of the screen, where node data is displayed as a node passes through the center of the view. My apologies for not creating better video screen captures. I'll improve this someday, perhaps by simply filming the screen with a camera on a tripod to get better quality. Anyway, few people have read this blog since I came back on the scene last year, so I'm not feeling much pressure to improve my video screen captures at the moment.
Continuing to work on big data visualization and scaling, I'm seeing a huge slowdown on my two-year-old MacBook Air as the node count approaches 2,300. This slowdown has several causes, but the high-level one is that an update() loop in the gaming engine, necessary for creating situational knowledge, runs on each node. Naturally, performance suffers as the number of nodes increases, especially on the graphics processor (GPU). I'm working on a way to offload the tasks that run on each node in the graph; however, the core issue on the road to situational awareness in cyberspace is hardware.
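To make the per-node cost concrete, here is a minimal C++ sketch (not my actual engine code; the Node struct and updateAll function are hypothetical) of the kind of per-frame update loop a gaming engine runs. Because every node does its own work every frame, CPU cost grows linearly with the node count, on top of whatever the GPU spends drawing the graph:

```cpp
#include <vector>

// Hypothetical node, as a gaming engine might represent one.
struct Node {
    float x = 0, y = 0, z = 0;    // position
    float vx = 0, vy = 0, vz = 0; // velocity
};

// Per-frame update: called once per frame for the whole graph.
// With ~2,300 nodes this body runs ~2,300 times per frame,
// which is where the slowdown comes from.
void updateAll(std::vector<Node>& nodes, float dt) {
    for (Node& n : nodes) {
        n.x += n.vx * dt;
        n.y += n.vy * dt;
        n.z += n.vz * dt;
    }
}
```

A real engine update does far more per node (physics, picking, label placement), so the constant factor is much larger, but the linear growth is the same.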
This hardware performance issue on the visualization/graphics side underscores the need to offload as much processing as possible before the data ever reaches the visualization engine.
Thanks to Jason Graves, I was able to save time by using his FDG C++ code that runs on Linux and focus on coding other parts of this research project. We were originally going to write the FDG code in C# and run it inside the visualization engine, but benchmark tests showed that, for performance reasons, it is better to run the FDG code on the back end. Of course, this might change 5 or 10 years from now…
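For readers unfamiliar with what an FDG back end actually computes, here is an illustrative C++ sketch of a single force-directed layout iteration (a Fruchterman-Reingold-style step; this is my own simplified example, not Jason's code). The all-pairs repulsion makes each iteration O(n²), which is exactly the kind of work worth keeping on the back end so the visualization engine only receives finished positions:

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

struct Vec2 { float x = 0, y = 0; };

// One layout iteration over node positions `pos` and an edge list.
// k is the ideal edge length; `step` bounds how far a node moves.
void fdgStep(std::vector<Vec2>& pos,
             const std::vector<std::pair<int,int>>& edges,
             float k, float step) {
    std::vector<Vec2> disp(pos.size());

    // Repulsion: every pair of nodes pushes apart -- O(n^2).
    for (size_t i = 0; i < pos.size(); ++i)
        for (size_t j = i + 1; j < pos.size(); ++j) {
            float dx = pos[i].x - pos[j].x;
            float dy = pos[i].y - pos[j].y;
            float d = std::sqrt(dx * dx + dy * dy) + 1e-6f;
            float f = (k * k) / d;
            disp[i].x += (dx / d) * f;  disp[i].y += (dy / d) * f;
            disp[j].x -= (dx / d) * f;  disp[j].y -= (dy / d) * f;
        }

    // Attraction: connected nodes pull together along each edge.
    for (const auto& e : edges) {
        float dx = pos[e.first].x - pos[e.second].x;
        float dy = pos[e.first].y - pos[e.second].y;
        float d = std::sqrt(dx * dx + dy * dy) + 1e-6f;
        float f = (d * d) / k;
        disp[e.first].x  -= (dx / d) * f;  disp[e.first].y  -= (dy / d) * f;
        disp[e.second].x += (dx / d) * f;  disp[e.second].y += (dy / d) * f;
    }

    // Move each node a bounded distance along its net force.
    for (size_t i = 0; i < pos.size(); ++i) {
        float d = std::sqrt(disp[i].x * disp[i].x + disp[i].y * disp[i].y) + 1e-6f;
        float m = std::min(step, d);
        pos[i].x += (disp[i].x / d) * m;
        pos[i].y += (disp[i].y / d) * m;
    }
}
```

Running a few hundred such iterations on the back end and streaming only the final (or per-iteration) positions to the engine keeps that O(n²) work off the machine doing the rendering.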