Transforming Neural Network Visualization and Interaction
Sirin Software started by completely redesigning the layout of the NeuroCAD tool to make it more professional and user-friendly. We used HTML to structure the content, CSS to style it, and JavaScript to make the interface interactive and compatible across different devices and screen sizes. React’s component-based design allowed us to create reusable UI elements, which improved performance and made the interface easier to maintain and update.
On the server side, we used Node.js to handle client requests and server logic efficiently. This enabled the application to manage high traffic volumes and provide quick response times, which is essential for a tool that processes large amounts of data.
We chose MongoDB to efficiently store and manage extensive data related to neural networks and complex data structures such as neuron details, layer configurations, and connection parameters.
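To make the storage approach concrete, here is a minimal sketch of what a layer document might look like in MongoDB. The field names and values are illustrative assumptions, not the actual NeuroCAD schema:

```python
# Hypothetical sketch of a MongoDB document for one neural-network layer.
# Field names are illustrative assumptions, not the actual NeuroCAD schema.
layer_document = {
    "layer_id": "hidden_1",
    "neuron_count": 128,
    "neurons": [
        {
            "neuron_id": 0,
            "bias": 0.1,
            # Connection parameters to neurons in the next layer
            "connections": [
                {"target": 0, "weight": 0.42},
                {"target": 1, "weight": -0.17},
            ],
        },
    ],
    "config": {"activation": "relu", "dropout": 0.2},
}

# Embedding neurons and their connection parameters inside the layer
# document keeps everything needed to render it in a single query.
total_connections = sum(len(n["connections"]) for n in layer_document["neurons"])
print(total_connections)  # 2
```

Embedding related data in one document is a common MongoDB modeling choice when the pieces are always read together, as layer details and connection parameters are here.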
For visualization, we integrated Qt and CUDA to render complex neural network structures and connections. Qt provided a framework for data visualization, allowing us to create detailed visual representations of neural networks. CUDA enabled GPU acceleration, which significantly sped up the rendering process. This was important for handling large neural networks with many layers and connections, where rendering performance can become a bottleneck. When Qt alone couldn’t meet the performance requirements, we fell back to OpenGL rendering.
We added features to delete layers, extract rendering logic, and switch between CUDA and Qt for visualization. The ability to delete layers was needed to manage and modify neural network designs easily. Extracting rendering logic from the Neuron Layer class allowed us to combine it with other visualization methods, providing more flexibility in how neural networks are displayed. Allowing users to switch between CUDA and Qt for visualization gave them the option to choose the best rendering method for their specific needs, balancing performance and visual quality.
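Extracting the rendering logic and letting users switch backends amounts to a strategy pattern. The sketch below illustrates the idea in simplified Python; the class and method names are assumptions for illustration, not the actual NeuroCAD API:

```python
# Illustrative sketch of user-switchable rendering backends; class and
# method names are assumptions, not the actual NeuroCAD API.
class Renderer:
    def render(self, layers):
        raise NotImplementedError

class QtRenderer(Renderer):
    def render(self, layers):
        return f"Qt: drew {len(layers)} layers"

class CudaRenderer(Renderer):
    def render(self, layers):
        return f"CUDA: drew {len(layers)} layers on GPU"

class NetworkView:
    """Holds rendering logic extracted from the layer class, so any
    backend can be swapped in without touching the network model."""
    def __init__(self, renderer):
        self.renderer = renderer

    def set_renderer(self, renderer):
        self.renderer = renderer

    def draw(self, layers):
        return self.renderer.render(layers)

view = NetworkView(QtRenderer())
print(view.draw(["input", "hidden", "output"]))  # Qt: drew 3 layers
view.set_renderer(CudaRenderer())                # user switches backend
print(view.draw(["input", "hidden", "output"]))  # CUDA: drew 3 layers on GPU
```

Because the view only depends on the `render` interface, adding a further backend (such as an OpenGL fallback) requires no changes to the network model itself.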
A complete unit testing framework was put in place to verify that the interface visualization and error-handling classes worked properly. Our team added tests for the various visualization classes to confirm correct rendering under varied scenarios. Error handling was also a key part of this framework, ensuring that the program could detect and recover from unanticipated problems.
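The tests below are a minimal sketch of this kind of coverage using Python's standard `unittest` module; `RendererStub` and its error behavior are illustrative assumptions, not the project's real classes:

```python
# Minimal sketch of rendering and error-handling tests; RendererStub and
# its behavior are illustrative assumptions, not the project's real classes.
import unittest

class RendererStub:
    def render(self, layers):
        if not layers:
            raise ValueError("nothing to render")
        return len(layers)

class TestRendering(unittest.TestCase):
    def test_renders_all_layers(self):
        self.assertEqual(RendererStub().render(["in", "out"]), 2)

    def test_empty_network_is_rejected(self):
        # Error handling: invalid input should fail loudly, not crash later
        with self.assertRaises(ValueError):
            RendererStub().render([])

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestRendering)
)
print(result.wasSuccessful())  # True
```

Checking both the happy path and the failure path is what lets such a suite guard rendering correctness and error recovery at the same time.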
We used Python to create 2D maps that reflected the connection probabilities between neurons at different levels. These maps were used to generate procedural designs for neural network connections, ensuring that connections were distributed according to specific patterns and rules, which is important for building realistic and effective neural networks.
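A small sketch of the idea, assuming a distance-based connection rule (the falloff constant and normalization are illustrative, not the project's actual rules):

```python
# Sketch of a 2D connection-probability map between two neuron layers,
# assuming a distance-based rule; the falloff constant is illustrative.
import math
import random

def probability_map(src_size, dst_size, falloff=5.0):
    """p[i][j]: probability that source neuron i connects to target j,
    decaying with the distance between their normalized positions."""
    return [
        [math.exp(-falloff * abs(i / src_size - j / dst_size))
         for j in range(dst_size)]
        for i in range(src_size)
    ]

def sample_connections(pmap, rng):
    """Procedurally generate connections by sampling each cell of the map."""
    return [(i, j)
            for i, row in enumerate(pmap)
            for j, p in enumerate(row)
            if rng.random() < p]

pmap = probability_map(4, 4)
connections = sample_connections(pmap, random.Random(42))
print(len(pmap), len(pmap[0]))  # 4 4
```

Sampling from the map rather than wiring neurons directly is what makes the designs procedural: the same rule can generate many valid connection layouts.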
The connection page was significantly enhanced, allowing users to draw connections using Qt visualization and control the genome through the GUI. We added functionalities to draw different types of maps and widgets for each map type, making the tool more flexible and powerful.
Users could specify connection properties with a user-friendly interface, such as the number of connections per neuron and the connection weights. This made it easier for users to design and modify neural networks according to their requirements.
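The options described above can be modeled as a small settings object behind the GUI. This is a hedged sketch; the field names and defaults are assumptions based on the properties mentioned, not the tool's real data model:

```python
# Illustrative model of the connection properties exposed in the GUI;
# field names and defaults are assumptions, not the tool's real data model.
from dataclasses import dataclass

@dataclass
class ConnectionSpec:
    connections_per_neuron: int = 4
    weight_range: tuple = (-1.0, 1.0)  # bounds for connection weights
    map_type: str = "probability"      # which map widget produced it

    def validate(self):
        lo, hi = self.weight_range
        return self.connections_per_neuron > 0 and lo <= hi

spec = ConnectionSpec(connections_per_neuron=8, weight_range=(-0.5, 0.5))
print(spec.validate())  # True
```

Validating the spec in one place keeps the GUI simple: each widget edits a field, and the network builder only ever consumes specs that have passed `validate`.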
For training, our team created a detailed training page that connected all fields to the Genome class, enabling testing on various types of training data. The training page was designed to be comprehensive and intuitive, allowing users to set up and monitor training sessions. We added features like a directory picker for different training data formats, a progress bar, and live training result updates to improve user experience and efficiency. The directory picker allowed users to select and organize training data in various formats, including images, videos, and text. The progress bar provided real-time feedback on the training process, while live updates allowed users to see the results as the training progressed.
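The progress bar and live result updates suggest a callback-style design, sketched below. The wiring and the per-epoch loss update are illustrative assumptions about how the GUI was fed, not the actual implementation:

```python
# Sketch of live progress reporting during training; the callback wiring
# and the per-epoch loss update are illustrative assumptions.
def train(epochs, on_progress):
    loss = 1.0
    for epoch in range(1, epochs + 1):
        loss *= 0.8  # stand-in for one real epoch of training
        # Push completion percentage and the latest result to the GUI
        on_progress(percent=100 * epoch // epochs, loss=loss)
    return loss

updates = []
train(4, on_progress=lambda percent, loss: updates.append((percent, loss)))
print(updates[-1][0])  # 100
```

Decoupling training from display this way lets the same loop drive a progress bar, a live result feed, or a headless log, depending on what the caller passes in.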
The training process was enhanced to include both one-sided and two-sided training, setting up training parameters, and evaluating network performance against test criteria. One-sided training focused on training a single neural network, while two-sided training involved training two complementary networks that worked together.
We added the ability to set various training parameters, such as the length of the training epoch and the maximum training time, to give users control over the training process. The performance of the trained networks was evaluated against predetermined criteria to identify the best-performing networks for future development.
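Put together, the parameters and the pass/fail evaluation can be sketched as follows; the field names, thresholds, and criteria keys are illustrative assumptions, not the project's actual values:

```python
# Hedged sketch of the training parameters and evaluation described above;
# field names, thresholds, and criteria keys are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TrainingParams:
    epoch_length: int = 100          # samples processed per epoch
    max_training_time: float = 60.0  # seconds before training is cut off
    two_sided: bool = False          # train two complementary networks

def meets_criteria(accuracy, max_error, criteria):
    """Evaluate a trained network against predetermined test criteria."""
    return (accuracy >= criteria["min_accuracy"]
            and max_error <= criteria["max_error"])

criteria = {"min_accuracy": 0.9, "max_error": 0.05}
print(meets_criteria(0.93, 0.04, criteria))  # True
print(meets_criteria(0.85, 0.04, criteria))  # False
```

A boolean gate like this is what lets the pipeline automatically separate networks worth keeping from those to discard before the selection stage.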
The selection and breeding pages were developed to manage the lifecycle of neural networks, from selection and deletion to breeding. We implemented logic for breeding networks on local computers or AWS, allowing for scalable and efficient processing. The selection page allowed users to select the best-performing networks based on their training results.
The breeding page provided tools for crossbreeding selected networks to create new generations with improved performance. We added unit tests to ensure the functionality of these features, maintaining the system’s reliability and accuracy.
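The selection-then-breeding flow follows a standard genetic-algorithm shape, sketched below. The genome representation (a flat list of weights) and single-point crossover are illustrative assumptions, not the project's actual breeding logic:

```python
# Minimal genetic-algorithm sketch of the selection and breeding flow;
# the flat-list genome and single-point crossover are illustrative assumptions.
import random

def select_best(scored, keep=2):
    """Selection: keep the top-scoring genomes, drop the rest."""
    ranked = sorted(scored, key=lambda gs: gs[1], reverse=True)
    return [genome for genome, _score in ranked[:keep]]

def crossbreed(a, b, rng):
    """Breeding: splice two parent genomes at a random crossover point."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

scored = [
    ([0.1, 0.2, 0.3], 0.91),
    ([0.4, 0.5, 0.6], 0.72),
    ([0.7, 0.8, 0.9], 0.95),
]
parents = select_best(scored)
child = crossbreed(parents[0], parents[1], random.Random(7))
print(len(child))  # 3
```

Because each generation only depends on scored genomes, the same loop runs unchanged whether the evaluations happen on a local machine or are fanned out to AWS instances.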