10/26/2025
Built At
Gator Hack IV
What is the problem you are trying to solve? Who does it affect?
Picking the perfect movie to watch is difficult. Flipping through genres takes far too long, especially when you have a bowl of snacks waiting to be eaten. Don't let indecisiveness ruin movie night: Sneaky Link is here to save it. According to Nielsen's State of Play report (https://www.nielsen.com/news-center/2023/nielsens-state-of-play-report-delivers-new-insights-as-streamings-next-evolution-brings-content-discovery-challenges-for-viewers), viewers spend roughly 10 minutes on average searching for the "right" movie. With Sneaky Link, you get the perfect movie night every time in a matter of a couple of minutes. Sneaky Link is here to help viewers across all demographics.
Idea Explanation
What is your idea? How does it fix the problem?
Our idea is to create a Letterboxd web scraper that collects movie data and compiles it so users can compare movie options. Our tool lets viewers spend drastically less time debating which movie to watch and more time enjoying their perfect movie of the night.
How do all the pieces fit together? Does your frontend make requests to your backend? Where does your database fit in?
We optimized our solution with the tools we had available through a framework we designed for data scraping. For our front end, we used a simple Jupyter notebook that guides users through setting up the virtual environment. When users run the chat bot via Jupyter, they can type in prompts requesting movie information. The chat bot then passes that request to our helper data-scraper function, which is driven by Selenium. Our back end primarily consists of this data-scraper helper, which opens the Letterboxd website through Selenium and scrapes information about the movies. It walks through the list of movies the chat bot passes in and searches for each one with WebDriver. Once a movie's page is loaded, its information is collected and fed back to the chat bot. We also built a simple memory management system so the chat bot remembers its past interactions with the user.
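As a rough sketch, the scraper helper could be structured like this. All function names and the CSS selectors for Letterboxd's pages are illustrative assumptions, not our exact code; the Selenium imports are kept inside the function so the module loads without a browser installed.

```python
BASE = "https://letterboxd.com"

def build_search_url(title: str) -> str:
    """Turn a movie title into a Letterboxd film-search URL."""
    return f"{BASE}/search/films/{title.strip().replace(' ', '+')}/"

def fetch_film_info(titles):
    """For each title, search Letterboxd and scrape basic details.

    Hypothetical sketch: the selectors below are placeholders for
    whatever the live page markup actually exposes.
    """
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # assumes chromedriver is on PATH
    results = {}
    try:
        for title in titles:
            driver.get(build_search_url(title))
            # Click the first search result (selector is illustrative)
            first = driver.find_element(By.CSS_SELECTOR, "ul.results li a")
            first.click()
            results[title] = {
                # Collect whatever fields the film page exposes
                "name": driver.find_element(By.CSS_SELECTOR, "h1").text,
            }
    finally:
        driver.quit()
    return results
```

The chat bot would call `fetch_film_info` with the list of titles it extracted from the user's prompt, then fold the returned dictionary back into its reply.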
What did you struggle with? How did you overcome it?
One huge challenge we ran into was the ethical use of data scraping on different companies' websites. Our original plan was to scrape LinkedIn or Letterboxd, but both websites require formal business contact for API access. Given the short notice on these policies, we adapted and used a Selenium bot to scrape remotely. LinkedIn's policy strictly prohibits such actions, so we transitioned to scraping the Letterboxd website for movie information. Thus we landed on our project: an AI chatbot that consolidates information about movies.
What did you learn? What did you accomplish?
During our brainstorming, we knew we had to build an application to solve the crisis of movie-night disagreements. We implemented a framework in which the data scraper and the Ollama AI can communicate with each other. We learned ethical data-scraping methods and how to structure the AI models. In the end, we established a basic framework for the movie AI. We also set up the UI by combining the Python scripts with Jupyter, so users get a more descriptive rundown of our project and a place to chat with this basic AI demo.
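A minimal sketch of how the chat loop, memory, and model could be wired together, assuming the `ollama` Python client with a locally pulled model. The function names and the `"llama3"` model tag are hypothetical, not our actual identifiers.

```python
def build_prompt(history, user_msg, scraped_context=""):
    """Flatten past turns plus any scraped movie data into one prompt."""
    lines = [f"{role}: {text}" for role, text in history]
    if scraped_context:
        lines.append(f"context: {scraped_context}")
    lines.append(f"user: {user_msg}")
    return "\n".join(lines)

def chat_turn(history, user_msg, scraped_context=""):
    """One round trip: prompt the model, then record both turns in memory."""
    import ollama  # local import so the sketch loads without ollama installed

    prompt = build_prompt(history, user_msg, scraped_context)
    reply = ollama.generate(model="llama3", prompt=prompt)["response"]
    history.append(("user", user_msg))
    history.append(("assistant", reply))
    return reply
```

Keeping memory as a plain list of `(role, text)` tuples mirrors the simple memory management described above: each turn is appended, and the whole history is replayed into the next prompt.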
What are the next steps for your project? How can you improve it?
For this project, we limited the scope to scraping movie information from Letterboxd because of the short notice from companies. The first big step after this project is to improve the data scraper so it can handle other user prompts, such as finding movies by director or genre, or even finding movies similar to ones the user lists. Another issue we did not have time to solve was the slowness of our Ollama bots, a result of the limited time we had to tune them while we were still settling on a project idea. With more time, we would either find better ways to tune our existing model or rebase the project on a new model that better serves users. We also had an issue with deployment itself, as we used Jupyter as a placeholder UI. In the future, we would build a real website/app that can take user requests at scale, drawing on our background in AWS cloud computing.