Mobile App • Hackathon Project

Revive

AI-powered recycling assistant using image recognition.

Year

2019

Role

Full Stack Development

Context

CUNY Hackathon

Inspiration

"New York City has no landfills or incinerators, yet residents produce 12,000 tons of waste every day. What happens when you throw something away?"

We want people to answer "give it to ReViVe," because, as the saying goes, "one man's trash is another man's treasure." On a more serious note, we are well aware of the problems garbage creates, not only for the environment but also in politics. China recently declared it would no longer accept our garbage, so where will it go now? We want recyclable garbage to go to centers where it can be reused. Our app's name is literally what we want to do for the Earth: help us Revive it!

How It Works

  1. Take a picture
  2. Find nearby center
  3. Take action!

Three steps are all that separate YOU from recycling unwanted items. Our application is a cross-platform app that enables users to find nearby recycling centers that accept the specific items they photograph.


First, we take a picture, which is sent to the Google Vision API to recognize what the item is.
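A request to the Vision API's `images:annotate` endpoint might look like the sketch below; the helper names, the API-key handling, and the label-picking logic are our own assumptions, not the project's actual code:

```javascript
// Sketch: send a base64-encoded photo to Google Vision label detection
// and pull out the most confident label. Function names are hypothetical.
const VISION_URL = 'https://vision.googleapis.com/v1/images:annotate';

async function recognizeItem(base64Image, apiKey) {
  const body = {
    requests: [{
      image: { content: base64Image },
      features: [{ type: 'LABEL_DETECTION', maxResults: 5 }],
    }],
  };
  const res = await fetch(`${VISION_URL}?key=${apiKey}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  return res.json();
}

// Pure helper: pick the highest-scoring label from a Vision response.
function topLabel(visionResponse) {
  const annotations =
    (visionResponse.responses &&
      visionResponse.responses[0] &&
      visionResponse.responses[0].labelAnnotations) || [];
  if (annotations.length === 0) return null;
  return annotations.reduce((best, a) => (a.score > best.score ? a : best))
    .description;
}
```

Keeping the response parsing in a small pure function like `topLabel` makes it easy to handle the "no label found" case in the UI.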


From there we show the user the results within the app and let them make any adjustments. Users can take pictures of multiple items (of different types), which are automatically saved to a list of To Recycle items.
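The To Recycle list can be kept as plain app state. A minimal sketch, with field and function names that are our own assumptions:

```javascript
// Sketch: a minimal "To Recycle" list. Each item keeps its photo URI and
// the category recognized for it. Names here are hypothetical.
function addToRecycleList(list, item) {
  // Skip exact duplicates of the same photo, while still allowing
  // multiple items of the same category.
  if (list.some((i) => i.photoUri === item.photoUri)) return list;
  return [...list, item];
}

function categoriesIn(list) {
  // Distinct categories, used later to query matching recycling centers.
  return [...new Set(list.map((i) => i.category))];
}
```

Returning a new array instead of mutating keeps this compatible with React state updates.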


When the user is ready to recycle, the app asks for their location and maps out the nearest recycling centers that accept their items. These locations are found using external APIs, including Earth911 and NYC Open Data. The results can be filtered (display only category X, or display all).
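Once the center records are fetched, the filtering and "nearest first" ordering can be done client-side. A sketch, assuming each center record carries coordinates and a list of accepted categories (these field names are our assumptions):

```javascript
// Sketch: filter recycling centers by accepted category and sort them by
// straight-line distance from the user. Field names are assumptions.
function haversineKm(a, b) {
  const R = 6371; // Earth radius in km
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// category === null means "display all".
function nearestCenters(centers, user, category) {
  return centers
    .filter((c) => category === null || c.accepts.includes(category))
    .sort((c1, c2) => haversineKm(user, c1) - haversineKm(user, c2));
}
```

Haversine distance is an approximation (as the crow flies), but it is good enough to rank centers before handing off to a map for directions.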


Depending on the dataset used for a specific item, the app provides detailed information about the recycling center, such as its name, address, phone number, email, and accepted categories.


Lastly, it will be up to you to take ACTION! and either call the center or drop the items off.

Challenges

  • Data fetched from the APIs was parsed as JSON, but some nested structures were many levels deep and hard to traverse safely
  • First time using Google Vision API and Google Cloud Services
  • Debugged the React camera library, which was not letting any of our phones take pictures
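The deep-nesting problem above is common with aggregated API responses. One way to avoid long chains of existence checks is a small safe-access helper; this is our own sketch, not the team's actual fix:

```javascript
// Sketch: walk a path of keys/indices through nested JSON, returning a
// fallback instead of throwing when any level is missing.
function dig(obj, path, fallback = null) {
  let cur = obj;
  for (const key of path) {
    if (cur == null || typeof cur !== 'object') return fallback;
    cur = cur[key];
  }
  return cur === undefined ? fallback : cur;
}
```

For example, `dig(res, ['result', 'records', 0, 'address'], 'unknown')` reads four levels deep without risking a `TypeError` if any level is absent.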

Accomplishments

  • Enabling image cache storage using React Native
  • Implementing Machine Learning to successfully categorize items
  • Giving and receiving help throughout the event

What We Learned

First of all, we all got to learn React Native, the Google Vision API, Expo, and other APIs. But we also learned about teamwork: each of us is from a different CUNY school, and yet we managed to work together for two days, everyone helping each other out. What we truly learned is that, working together, great things can be achieved!

Built With

React Native • Google Vision API • Google Cloud Services • Expo • Earth911 API

Learn More

View on Devpost