As part of a Data Structures and Algorithms project during my master's, we were challenged to solve the 2016 Google Hashcode problem. The challenge set by Google revolved around deciding which videos to store in which local data centres, each of which has limited capacity. Particular user groups access particular videos, so I needed to determine the most efficient storage location for each video.
The first task was to code the scoring algorithm that Google provides, which rates the system as it currently stands. Once this was done, we could start looking at potential solutions to the problem.
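To give a flavour of what that scoring step involves, here is a minimal sketch in Python. It is an assumption on my part, not the project's actual code: I assume each request group is a (video, endpoint, count) tuple, each endpoint knows its data-centre latency and its latency to each reachable cache, and the score rewards the total latency saved per request.

```python
# Hedged sketch of a Hashcode-style scoring function (hypothetical data layout,
# not the official implementation).
def score(requests, endpoints, caches):
    """Average time saved per request, scaled by 1000 and rounded down."""
    total_saved = 0
    total_requests = 0
    for video, endpoint_id, count in requests:
        dc_latency, cache_latencies = endpoints[endpoint_id]
        best = dc_latency  # fall back to fetching from the data centre
        for cache_id, latency in cache_latencies.items():
            # Only caches that actually hold the video can serve it
            if video in caches[cache_id] and latency < best:
                best = latency
        total_saved += count * (dc_latency - best)
        total_requests += count
    return total_saved * 1000 // total_requests

# Tiny worked example: one endpoint (data centre at 1000ms, cache 0 at 100ms),
# cache 0 holds video 0, and 10 requests arrive for video 0.
endpoints = {0: (1000, {0: 100})}
caches = {0: {0}}
requests = [(0, 0, 10)]
print(score(requests, endpoints, caches))  # each request saves 900ms
```

The scaling by 1000 and the integer division mirror the style of score the contest reports; the exact formula in the real problem statement may differ.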
The project required us to attempt a solution with genetic, hill climbing, random search and simulated annealing algorithms. Hill climbing was the slowest of the four, but it produced a good answer most consistently. The genetic algorithm and simulated annealing were more efficient and returned an answer more quickly, but that answer was more likely to be poor. Random search performed worst on both counts, being neither efficient nor reliable in the quality of its results.
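As an illustration of the simplest of those approaches, here is a generic hill-climbing sketch. This is my own toy example, not the project's code: it maximises an arbitrary fitness function by repeatedly trying a random neighbour and keeping it only if it scores higher, which is why the method is consistent but slow.

```python
import random

def hill_climb(initial, neighbour, fitness, iterations=1000, seed=0):
    """Generic hill climbing: accept a random neighbour only if it improves."""
    rng = random.Random(seed)
    best = initial
    best_score = fitness(best)
    for _ in range(iterations):
        candidate = neighbour(best, rng)
        candidate_score = fitness(candidate)
        if candidate_score > best_score:  # strictly-better moves only
            best, best_score = candidate, candidate_score
    return best, best_score

# Toy stand-in for "which videos are stored where": a bitstring whose
# fitness is the number of set bits. A neighbour flips one random bit.
def flip_one_bit(bits, rng):
    i = rng.randrange(len(bits))
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

solution, value = hill_climb((0,) * 16, flip_one_bit, sum)
print(value)
```

Simulated annealing differs from this only in occasionally accepting worse candidates (with a probability that decays over time), which is what lets it escape local optima at the cost of consistency.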
The project can be found on GitHub.