Kovács, László; Sáfrány, Artúr
Date: 2020-05-12
Year: 2020
URI: http://hdl.handle.net/2437/286925

Abstract: This thesis provides an overview of the different facets of parallel computing and highlights the importance of parallelization through deep learning. It covers the core aspects of neural networks and gives insight into a custom implementation called Borjomi. The thesis offers a brief historical overview of parallelism and, without going into deep technical detail, covers its various types, including task-level, instruction-level, and data-level parallelism. Regarding neural networks, it presents the core concepts, the main components, and the basic workflow, concentrating on technical and implementation details rather than the scientific background. After this general overview, the focus shifts to convolutional networks. The last part of the paper provides insight into my own deep learning implementation, Borjomi, a lightweight deep learning framework specialized for convolutional networks. It presents the main structure of the project, the tensor implementation, and the management of layer connections. It also highlights the capabilities and the supported architectures that can be used to accelerate Borjomi with different kinds of parallelization techniques.

Pages: 37
Language: en
Keywords: parallelisation; neural networks; deep learning; multithreading
Title: Introduction into parallel computing through deep learning
Subject: DEENK Témalista::Informatika