Bieber-net Bieb-emotion Box

Mood lighting driven by delicious Justin Bieber tweets. A Raspberry Pi 2 periodically grabs a tweet, classifies its sentiment as positive, negative, or neutral using a natural language processing API, changes the LED colors based on that sentiment, and finally reads the tweet aloud with a text-to-speech synthesizer. Users can also manually simulate tweets and select different Twitter users to grab tweets from. GitHub Video (short), Video (long) 4/29/16
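At its core the loop is fetch, classify, light, speak. Here is a minimal Python sketch of that pipeline; the classifier, LED, and TTS functions below are simplified placeholders, not the project's actual API calls:

```python
# Sentiment -> RGB color mapping (green/red/blue chosen here for illustration).
SENTIMENT_COLORS = {
    "positive": (0, 255, 0),
    "negative": (255, 0, 0),
    "neutral": (0, 0, 255),
}

def classify_sentiment(text):
    """Placeholder for the NLP API call; a real classifier replaces this."""
    happy, sad = {"love", "great", "happy"}, {"hate", "sad", "bad"}
    words = set(text.lower().split())
    if words & happy:
        return "positive"
    if words & sad:
        return "negative"
    return "neutral"

def set_led_color(r, g, b):
    """Placeholder for driving the LEDs from the Pi's GPIO/SPI pins."""
    print(f"LEDs -> ({r}, {g}, {b})")

def speak(text):
    """Placeholder for the text-to-speech synthesizer."""
    print(f"TTS: {text}")

def on_new_tweet(tweet):
    sentiment = classify_sentiment(tweet)
    set_led_color(*SENTIMENT_COLORS[sentiment])
    speak(tweet)

on_new_tweet("I love my fans, you guys are great!")  # -> green LEDs, spoken aloud
```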

GIF Animation-based Wall Lighting

A scalable, modular set of LED light panels for immersive lighting, this platform can take any image or animated GIF and “play it.” Further, the user can swipe over a gesture sensor to perspective-transform GIFs in real time, one of many possibilities for image transforms in response to sensor data. The internals are driven by a Raspberry Pi B+ connected to about 300 RGB LEDs over SPI; the code is written in C++ and relies heavily on OpenCV for real-time image processing. Within about a month, I designed and fabricated all the panels, soldered and wired them all together, and wrote all the software, including implementing my own algorithm to allow modular reconfiguration of the panels and building a basic web app with Meteor.js for remote control. Video 1, Video 2, Video 3 GitHub ESE 519 Real-time Embedded Systems project 12/11/15
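As a rough illustration of the playback pipeline (the real implementation is C++/OpenCV on the Pi), here is a Python sketch that resizes each GIF frame down to a hypothetical panel-grid resolution, applies a gesture-driven perspective warp, and hands the frame to a placeholder SPI writer:

```python
import time
import cv2
import imageio.v3 as iio
import numpy as np

LED_WIDTH, LED_HEIGHT = 20, 15  # hypothetical grid (~300 LEDs), not the real layout

def push_frame_over_spi(frame):
    """Placeholder for streaming one RGB frame to the LED chain over SPI."""
    assert frame.shape == (LED_HEIGHT, LED_WIDTH, 3)
    # spi.writebytes(frame.tobytes())  # real code writes the raw bytes here

def warp_frame(frame, skew):
    """Perspective-skew a frame by `skew` pixels, as a gesture swipe might."""
    h, w = frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32([[skew, 0], [w - skew, 0], [w, h], [0, h]])
    return cv2.warpPerspective(frame, cv2.getPerspectiveTransform(src, dst), (w, h))

def play_gif(path, fps=15, skew=0):
    frames = iio.imread(path)  # a GIF decodes to (n_frames, h, w, channels)
    for frame in frames:
        frame = warp_frame(frame[..., :3], skew)  # drop alpha, apply warp
        small = cv2.resize(frame, (LED_WIDTH, LED_HEIGHT),
                           interpolation=cv2.INTER_AREA)
        push_frame_over_spi(small)
        time.sleep(1.0 / fps)
```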

Calmly

App for managing stress and anxiety in social situations. Calmly is an Android app paired with a smartwatch that monitors your heart rate and, based on a calibrated measurement, notifies you on your smartwatch to keep it chill whenever the app determines you’re stressed or anxious. Intended to be turned on in situations where you’d normally find yourself stressed or anxious, the app aims to foster better stress and anxiety management in its users. Additionally, data is stored in a Parse (NoSQL) database so users can view trends in their heart rate over time. GitHub Made over a weekend at the PennApps Winter 2016 hackathon. Collaborators: QingXiao Dong, Lydia Li 01/24/16
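The detection logic reduces to comparing smoothed heart-rate samples against a per-user calibrated threshold. A minimal sketch, with illustrative numbers rather than the app's actual calibration:

```python
from collections import deque

class StressMonitor:
    def __init__(self, baseline_bpm, margin=1.25, window=10):
        self.threshold = baseline_bpm * margin  # calibrated per user
        self.recent = deque(maxlen=window)      # smooth over recent samples

    def update(self, bpm):
        """Feed one heart-rate sample; return True when the smoothed rate
        stays above the calibrated threshold (time to notify the watch)."""
        self.recent.append(bpm)
        avg = sum(self.recent) / len(self.recent)
        return len(self.recent) == self.recent.maxlen and avg > self.threshold

monitor = StressMonitor(baseline_bpm=65)
for bpm in [68, 70, 85, 90, 92, 95, 96, 97, 98, 99]:
    if monitor.update(bpm):
        print("Notify watch: keep it chill")
```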

Robockey

Autonomous mobile robots for a 3v3 robot hockey tournament with 30 competing teams. Each robot was constructed from laser-cut acrylic and powered by two 600mAh 9V batteries, one for motor power and one for logic. Each localizes its position from a global reference similar to how a Wii Remote does, and detects an infrared-emitting puck using phototransistors spread around its base. Solenoids were used to shoot the puck. Programming was done on an AVR microcontroller, PID control was used to drive the robots, and the overall code was organized as a finite state machine. My role involved implementing the electronics and contributing to the driver, localization, and control algorithms. Video 1, Video 2 GitHub MEAM 510 Design of Mechatronic Systems final project Collaborators: Nicholas LaBarbera, Sean Reidy 12/12/13
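For a flavor of the control loop (the real firmware is C on an AVR; the gains and motor interface here are illustrative), a PID heading controller for a differential drive might look like:

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        """Standard PID update: proportional + integral + derivative terms."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

heading_pid = PID(kp=2.0, ki=0.1, kd=0.5)  # example gains, tuned on the robot

def steer(target_heading, current_heading, dt=0.02):
    """Turn a heading error into differential wheel commands (left, right)."""
    correction = heading_pid.step(target_heading - current_heading, dt)
    base_speed = 0.6
    return base_speed - correction, base_speed + correction
```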

Face Detection & Replacement

And then there was Obama. The nefarious, Panoptic forces of the Internet had finally won, our lives categorized and remapped into submission unto his Holiness. Histogram of Oriented Gradients (HOG) was used to obtain face descriptors, placing pixel gradients into orientation bins from 0-180° to generate a histogram. To learn face HOGs, an SVM was trained to classify faces using 6,000+ faces and 170,000+ non-faces as training data. Face replacement was done by detecting facial parts to obtain control points for Thin Plate Spline (TPS) morphing; then Poisson blending (third-party code) integrates the desired face into the image. My primary role was writing the HOG algorithm and implementing the SVM. GitHub CIS 581 Computer Vision & Computational Photography final project Collaborator: Aayush Gupta 12/21/14
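To make the descriptor step concrete, here is a compact sketch of per-cell HOG histograms in Python; the 8-pixel cells and 9 bins are common defaults assumed for illustration, not necessarily the values we used:

```python
import numpy as np

def hog_cells(image, cell=8, bins=9):
    """Bin pixel gradients by unsigned orientation (0-180 deg) into
    per-cell histograms, then concatenate into one descriptor vector."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180  # fold into [0, 180)
    h, w = image.shape
    hists = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            sl = np.s_[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            hist, _ = np.histogram(orientation[sl], bins=bins,
                                   range=(0, 180), weights=magnitude[sl])
            hists[i, j] = hist
    return hists.reshape(-1)

descriptor = hog_cells(np.random.rand(64, 64) * 255)  # toy input image
```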

“Ratatat” — Beat Audio-visualizer

Patatap-inspired beat/sample audio-visualization. Using the A-Z keys and spacebar, delightful animations appear on the screen in sync with beats/samples triggered by each keypress. My role involved implementing the software design and a third of the keyboard. This project uses openFrameworks, which I learned on my own time; this is the first project I used it for. No relation to the band Ratatat. Video GitHub Collaborator: Jack Kearney 12/14/15
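The core of the design is a dispatch table pairing each key with a sample and an animation, so sound and visuals fire on the same keypress. A tiny sketch of the idea (the real project is C++/openFrameworks; these file names and animation labels are made up):

```python
import string

# Map every key to a (sample, animation) pair.
KEY_BINDINGS = {key: (f"samples/{key}.wav", f"anim_{i % 5}")
                for i, key in enumerate(string.ascii_lowercase)}
KEY_BINDINGS[" "] = ("samples/kick.wav", "anim_burst")

def on_key_pressed(key, play_sample, trigger_animation):
    """Fire the sound and its paired animation on the same frame."""
    if key in KEY_BINDINGS:
        sample, anim = KEY_BINDINGS[key]
        play_sample(sample)
        trigger_animation(anim)

on_key_pressed("a", print, print)  # -> samples/a.wav, anim_0
```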

Instaderm

Inspired by the lack of people of color in the dermatology photos used for education and in machine learning datasets, Instaderm is an Android app meant to build a database to which dermatologists can submit their own “cases,” encouraging greater representation. Building upon an existing code base, our group’s focus was on refactoring code, emphasizing good software design and code quality, and continually adding new features such as user profiles in an Agile approach. Besides contributing to refactoring and new features, my role involved bottom-lining Git version control between team members. CIS 573 Software Engineering project (source code belongs to the University of Pennsylvania) Collaborators: Lydia Li, Ryan Vo, William Wang 12/14/15

100% INTERNET でそう (Platformer)

Collect coins, appreciate true art, and destroy evil. A short game made from the ground up in Java. Download and play from itch.io CIS 120 Programming & Data Structures project 04/26/15

Face Morphing

Implementing Delaunay Triangulation and Thin Plate Spline algorithms to blend faces, or: How I learned to stop worrying and love the glossy, fabulous, and incomprehensible Internet machine. Delaunay Triangulation involves generating an optimal triangular mesh over an image where the corners of each triangle lie at control points (eyes, nose, mouth, etc.). To blend faces, meshes are generated for two face images and averaged into one mesh. The pixels in each triangle of one image are then warped toward the corresponding triangle in the average mesh. Doing this for both images and overlaying them results in a blended face. The Thin Plate Spline method uses the same concept but applies one single transformation to all pixels, determined by a spline function. The final result is the ability to blend two faces into a final face that is easily tweakable to look more like one face or the other. Video GitHub CIS 581 Computer Vision & Computational Photography project 10/14/14
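Here is a minimal sketch of the triangulation-and-averaging step, assuming matching control points are already available (a full morph would also warp and cross-dissolve pixels triangle by triangle):

```python
import numpy as np
import cv2
from scipy.spatial import Delaunay

def average_mesh(points_a, points_b, alpha=0.5):
    """Blend two sets of matching control points and triangulate the result;
    alpha tweaks the blend toward one face or the other."""
    avg = (1 - alpha) * points_a + alpha * points_b
    return avg, Delaunay(avg)

def triangle_warps(points_src, points_avg, tri):
    """Affine transform from each source triangle to its averaged triangle."""
    warps = []
    for simplex in tri.simplices:
        src = points_src[simplex].astype(np.float32)
        dst = points_avg[simplex].astype(np.float32)
        warps.append(cv2.getAffineTransform(src, dst))
    return warps

# Toy matching control points (eyes, nose, mouth corners) for two faces:
a = np.array([[30, 40], [70, 40], [50, 60], [35, 80], [65, 80]], float)
b = np.array([[28, 42], [72, 38], [50, 65], [30, 85], [70, 82]], float)
avg, tri = average_mesh(a, b)
warps = triangle_warps(a, avg, tri)  # one affine warp per mesh triangle
```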

Canny Edge Detection

Implementing the Canny edge detection algorithm: an introduction to seeing the world as a computer does. I implemented my own Canny edge detector, a quintessential computer vision algorithm, to detect edges within an input image. The algorithm revolves around computing pixel gradients across an image via convolution. Subsequent steps include non-maximum suppression of pixel values and edge linking (involving dynamic programming and hysteresis). GitHub CIS 581 Computer Vision & Computational Photography project Collaborator: Aayush Gupta 09/14/14
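A condensed sketch of those stages follows (Gaussian smoothing is omitted, the gradient directions are quantized to four bins, and the thresholds are illustrative rather than the values we used):

```python
import numpy as np
from scipy import ndimage

def canny(image, low=0.1, high=0.3):
    # 1. Pixel gradients via convolution (Sobel kernels).
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    mag /= mag.max() or 1.0
    angle = (np.rad2deg(np.arctan2(gy, gx)) % 180) // 45  # 4 direction bins

    # 2. Non-maximum suppression: keep pixels that beat both neighbors
    # along their gradient direction.
    offsets = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    thin = np.zeros_like(mag)
    h, w = mag.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            di, dj = offsets[int(angle[i, j]) % 4]
            if mag[i, j] >= mag[i+di, j+dj] and mag[i, j] >= mag[i-di, j-dj]:
                thin[i, j] = mag[i, j]

    # 3. Hysteresis: strong edges seed; weak edges survive only when
    # connected to a strong one.
    strong = thin >= high
    weak = thin >= low
    labels, _ = ndimage.label(weak)
    keep = np.isin(labels, np.unique(labels[strong]))
    return keep & weak
```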