The need for inexpensive 3D display technologies grows as augmented reality applications continue to increase in popularity. Funnel Vision aims to bring 3D light fields to the physical realm using lenticular rendering and a 4K monitor. In real time, it creates an augmented reality experience that can be viewed from any angle while only incorporating items that can be found in any hardware or electronics store. For this project, I learned how to implement shaders in Unity, allowing the project to run in real time. You can read more about the project here!
Incredible documentaries, like Chasing Coral (2017) and Blue Planet (2001, 2017), tell the story of climate change and its impact on aquatic ecosystems. Because not many individuals have the opportunity to visit coral reefs (especially for extended periods of time to witness temperature fluctuations and bleaching events), it is difficult to convey the scale and severity of these changes. We believe that providing a tangible, at-scale experience telling the story of coral reef bleaching and climate change could help individuals empathize with the ocean. Additionally, by thoughtfully crafting the narrative of the experience, guests could leave with actionable solutions and a determined attitude for protecting our planet. Science communication and storytelling are crucial to inspire the next generation of ocean explorers and activists.
Projection mapping is a technique used in themed entertainment to project digital content onto surfaces by mapping the digital information to regions on the physical surface. Typically, projections are displayed on a white or light-colored surface. Because bleached coral tends to be white, we can build a bleached coral reef set and project onto that surface. The projection can "restore" the coral to vibrant colors and also display data directly onto the coral surface. By calculating the shadows cast from the primary projector, this project's software will dynamically fill areas that need projection by repositioning the second projector appropriately. This will allow us to map and project onto complex geometry like coral. You can check for updates on this project here!
In Hiroshi Ishii's Tangible Interfaces class, I had the opportunity to work on my design skills as well as my technical abilities. I focused on the programming, fabrication, and project management for this project. You can read more about the project here!
In my Computational Photography class, we were required to do a final project, and we had the option to come up with our own assignment if we were feeling ambitious. I've always enjoyed looking at low poly art projects, but I've found them too time-consuming and tedious to attempt one myself. That is how I came up with a Low Poly Art Generator for my project. The program I wrote takes in an image file and automatically processes it, returning a triangulated version of the image. While there are other programs that attempt to represent images in low poly form, I feel that my program is more sophisticated: most other programs simply overlay a triangular mosaic, which compromises the integrity of the image and loses its edges and main features.
I wrote a program in C++ that takes in an image file and computes the edge and corner features of the image. My program then uses a Delaunay triangulation algorithm to triangulate a subset of the computed points. The program doesn't use every computed point, because that would produce a cluttered, overly dense triangulation. With the triangulated points, the program then fills in each triangle with the color value at the triangle's centroid. Notice in the first set of images (the scene from Up) that the edges between the balloons and the sky are preserved and are not characteristically jagged like the output of other low poly generators. In the second set of images, you can see where the points were computed for the flower image.
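The fill step can be sketched in C++ as follows. This is an illustrative fragment, not the project's actual code: the `Color`/`Point` structs, the row-major image layout, and the function name are all assumptions made for the sketch.

```cpp
#include <array>
#include <vector>

// A pixel color and a 2D point; the real program works on full image buffers.
struct Color { unsigned char r, g, b; };
struct Point { double x, y; };

// Hypothetical helper mirroring the fill step described above: each triangle
// is flat-shaded with the image color sampled at its centroid.
Color sampleTriangleColor(const std::vector<std::vector<Color>>& image,
                          const std::array<Point, 3>& tri) {
    double cx = (tri[0].x + tri[1].x + tri[2].x) / 3.0;
    double cy = (tri[0].y + tri[1].y + tri[2].y) / 3.0;
    return image[static_cast<int>(cy)][static_cast<int>(cx)];
}
```

Sampling a single point per triangle is what produces the flat, faceted look characteristic of low poly art.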
I might try to develop this into a free application for mobile devices because I think the novelty of the program would attract users and it would be fun to try mobile development. I think a more interesting expansion would be to apply this to video (especially live video). There are more considerations to be made, since it would require the triangles to transition smoothly between frames.
Procedural generation and parameterization are important concepts in computer graphics. Procedural generation allows animators to efficiently create unique designs based on a set of parameters. In this project, I created an OpenSCAD program that procedurally generates Lego bricks. The parameters are exposed as top-level inputs that the user specifies, and the program automatically returns a solid-modeled Lego brick with the desired specifications. The model is to scale with authentic manufactured Legos, so you can actually 3D print the pieces and they will snap together.
The parameters that can be altered are length and width (described as the number of studs on top of the brick), thickness, and color. While the program itself is simple, it is a useful tool and was a great learning experience. This summer, I will be designing parametric quadcopters and many of the skills will be transferable to that project.
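The actual project is written in OpenSCAD, but the parameterization idea can be sketched in C++. The dimensions below are the commonly cited LEGO measurements (8.0 mm stud pitch, 9.6 mm full-brick height, 3.2 mm flat/plate height); the struct and function names are illustrative.

```cpp
// Brick envelope dimensions in millimeters, derived from the user-facing
// parameters: stud counts for length/width and a full-vs-flat thickness flag.
struct BrickDims { double length, width, height; };

BrickDims brickDims(int studsLong, int studsWide, bool fullHeight) {
    const double pitch = 8.0;  // commonly cited center-to-center stud spacing
    BrickDims d;
    d.length = studsLong * pitch;
    d.width  = studsWide * pitch;
    d.height = fullHeight ? 9.6 : 3.2;  // full brick vs. flat plate
    return d;
}
```

In the OpenSCAD version, these same top-level parameters drive the solid model directly, so changing a stud count regenerates the whole brick.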
In the first image, you can see bricks of size 2x2 in various colors. In the second image, you can see the thickness change between flat and full size. In the third image, you can see that the Lego representations are scalable in both length and width. In the final image, you can see two of the bricks I 3D printed. Notice that they lock together, just like authentic bricks!
In this project, I worked on voxelizing solid models. A voxel is the three-dimensional equivalent of a pixel. In the assignment, we were asked to represent the given closed models as voxels, which was fairly easy to implement using ray casting. If a ray had crossed through the surface an even number of times (including zero) before reaching a voxel, that voxel was marked as outside the object; if it had crossed an odd number of times, the voxel was inside.
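The parity rule reduces to a small test once the ray's surface crossings are known. A minimal sketch, with an illustrative function name, assuming crossing positions along the ray have already been computed:

```cpp
#include <vector>

// Even-odd parity test: count how many surface crossings lie before the
// sample point along the ray. An odd count means the point is inside.
bool isInside(const std::vector<double>& crossings, double t) {
    int count = 0;
    for (double c : crossings)
        if (c < t) ++count;
    return count % 2 == 1;
}
```

For example, a ray entering a solid at t = 1 and exiting at t = 4 classifies voxel centers between those crossings as inside and everything else as outside.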
In the images above, you can see how each object translated into voxels at 32x32x32 and 64x64x64 voxel resolution.
I took the assignment a step further, on my own time, to support objects that don't necessarily have closed surfaces. For that implementation, I increased the number of rays being cast and took the most frequent value for each voxel as its representation. As you can see in the last image, as the number of rays increases, the voxel representation more accurately approximates the missing edge of the octahedron.
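The most-frequent-value step is a majority vote over the per-ray parity results. A sketch, assuming each ray's inside/outside verdict for a voxel has been collected; breaking ties toward "outside" is a choice made here for the sketch, not necessarily the project's behavior:

```cpp
#include <vector>

// Majority vote for open meshes: each cast ray gives an inside/outside
// opinion for the voxel; the most frequent value wins (ties -> outside).
bool majorityInside(const std::vector<bool>& votes) {
    int inside = 0;
    for (bool v : votes)
        if (v) ++inside;
    return 2 * inside > static_cast<int>(votes.size());
}
```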
For my final graphics project, I created a parametric hair simulator. I decided to work on this project because it's relevant in the animation industry and there are many directions in which this project can be extended. My simulator creates a hair object and then simulates the hairs by calculating the external forces, applying the forces to each hair vertex, and then updating the position of each hair vertex. The forces were gravity and wind (which was controlled by moving the cursor around in the UI). Next steps for this project include adding shadow maps and transparency and implementing some form of artistic control.
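The force-apply-update loop can be sketched as one explicit Euler step per frame. This is a simplified sketch, not the simulator's actual code: unit masses are assumed, and the wind vector stands in for the cursor-driven input.

```cpp
#include <vector>

struct Vec3 { double x, y, z; };

// One explicit-Euler step for a strand: accumulate external forces
// (gravity + wind), update each vertex's velocity, then its position.
void stepHair(std::vector<Vec3>& pos, std::vector<Vec3>& vel,
              const Vec3& wind, double dt) {
    const Vec3 gravity{0.0, -9.8, 0.0};
    for (std::size_t i = 0; i < pos.size(); ++i) {
        Vec3 f{gravity.x + wind.x, gravity.y + wind.y, gravity.z + wind.z};
        vel[i].x += f.x * dt;  vel[i].y += f.y * dt;  vel[i].z += f.z * dt;
        pos[i].x += vel[i].x * dt;
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }
}
```

Updating velocity before position (semi-implicit Euler) keeps the integration more stable than the naive ordering.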
I constructed a hierarchical character model that can be interactively controlled with a user interface. I implemented skeletal subspace deformation (SSD), a method for attaching a skin to a hierarchical skeleton which naturally deforms when the skeleton's joint angles are manipulated. Check out my reel (up at the top) to see a video of this in real-time action.
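The core of SSD is linear blend skinning: each skin vertex is a weighted sum of that vertex transformed by each influencing joint. In this sketch the bone transforms are reduced to pure translations for brevity; the real method uses full joint matrices, and the names here are illustrative.

```cpp
#include <vector>

struct Vec3 { double x, y, z; };

// Linear blend skinning: v' = sum_i w_i * (T_i applied to v), where the
// weights w_i sum to 1. Here each T_i is just a translation for brevity.
Vec3 skinVertex(const Vec3& v,
                const std::vector<Vec3>& boneTranslations,
                const std::vector<double>& weights) {
    Vec3 out{0, 0, 0};
    for (std::size_t i = 0; i < weights.size(); ++i) {
        out.x += weights[i] * (v.x + boneTranslations[i].x);
        out.y += weights[i] * (v.y + boneTranslations[i].y);
        out.z += weights[i] * (v.z + boneTranslations[i].z);
    }
    return out;
}
```

A vertex weighted half-and-half between two joints lands halfway between the two transformed positions, which is what makes the skin bend smoothly at elbows and knees.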
Physical simulation is used in movies and video games to animate a variety of phenomena: explosions, car crashes, water, cloth, and so on. Such animations are fairly difficult to keyframe, but relatively easy to simulate given the physical laws that govern their motion. I used springs to build a visually appealing simulation of cloth, as seen in my reel. There are also additional simulations of circular motion, a simple pendulum, and a particle chain. I used Euler, trapezoidal, and RK4 integrators in my implementation.
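The simplest of the three integrators, forward Euler, can be sketched on a 1D spring-mass system; the cloth applies the same kind of update to a grid of masses connected by springs. The struct and function names are illustrative.

```cpp
// Forward Euler step for a 1D spring-mass system with force F = -k x.
struct State { double x, v; };

State eulerStep(State s, double k, double m, double dt) {
    double a = -k * s.x / m;      // acceleration from Hooke's law
    State next;
    next.x = s.x + s.v * dt;      // position advanced with old velocity
    next.v = s.v + a * dt;        // velocity advanced with old acceleration
    return next;
}
```

Forward Euler tends to gain energy on oscillatory systems like springs, which is exactly why higher-order schemes such as trapezoidal and RK4 are worth implementing alongside it.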
First, I implemented a ray caster. A ray caster sends a ray for each pixel and intersects it with all the objects in the scene. My ray caster supports perspective cameras as well as several primitives (spheres, planes, triangles, and bunnies). I also implemented support for Phong shading and texture mapping.
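The sphere case is the classic intersection test: substitute the ray into the sphere equation and solve the resulting quadratic for the nearest positive hit distance. A sketch with illustrative names, not the project's actual code:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Ray-sphere intersection: solve |o + t d - c|^2 = r^2 for the nearest
// positive t. Returns -1 when the ray misses (or the hit is behind o).
double intersectSphere(const Vec3& o, const Vec3& d,
                       const Vec3& c, double r) {
    Vec3 oc{o.x - c.x, o.y - c.y, o.z - c.z};
    double a = dot(d, d);
    double b = 2.0 * dot(oc, d);
    double cc = dot(oc, oc) - r * r;
    double disc = b * b - 4.0 * a * cc;
    if (disc < 0.0) return -1.0;
    double t = (-b - std::sqrt(disc)) / (2.0 * a);
    return t > 0.0 ? t : -1.0;
}
```

The caster keeps the smallest such t across all scene objects; that closest hit determines the pixel's shading.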
Next, I improved the rendering capabilities of my ray caster by adding several new features. I improved the shading model by recursively generating rays to create reflections, refractions, and shadows. Then I added procedural solid texturing. Finally, I implemented jittering and supersampling to fix aliasing problems.
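The recursively generated reflection rays use the standard mirror formula, r = d - 2(d·n)n, where d is the incoming direction and n the unit surface normal. A minimal sketch:

```cpp
struct Vec3 { double x, y, z; };

// Mirror reflection used when spawning secondary rays:
// r = d - 2 (d . n) n, assuming n is a unit-length surface normal.
Vec3 reflect(const Vec3& d, const Vec3& n) {
    double dn = d.x * n.x + d.y * n.y + d.z * n.z;
    return Vec3{d.x - 2.0 * dn * n.x,
                d.y - 2.0 * dn * n.y,
                d.z - 2.0 * dn * n.z};
}
```

The renderer traces the reflected ray from the hit point and blends its color into the shading result, bounded by a maximum recursion depth.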
I started a YouTube channel this year called G1RLC0DE. The target audience is middle school and high school students who are interested in learning computer science in an engaging, relevant way. The first video is about how if statements work in the context of online quizzes like the ones on BuzzFeed.
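The quiz analogy boils down to branching on an answer tally. A toy example of the idea (the quiz results and thresholds here are invented for illustration, and the video itself may use a different language):

```cpp
#include <string>

// A BuzzFeed-style quiz is just an if/else chain: tally the answers,
// then branch on the score to pick a result.
std::string quizResult(int score) {
    if (score >= 8) {
        return "You're a golden retriever!";
    } else if (score >= 4) {
        return "You're a tabby cat!";
    } else {
        return "You're a goldfish!";
    }
}
```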
I was totally inspired by an exhibit at the Exploratorium in San Francisco and decided to take a stab at a DIY version! The box was manufactured by a friend, and I hand-rolled pieces of Mylar (think super shiny paper) that reflect light in a very appealing way. Here are pictures of my light box backlit by various colors.
Photoshop is strange. While I was working on updating my website, I overloaded Photoshop with 3D files and it crashed. When I opened it back up, I discovered it had glitched, and the images it now produced when I pressed undo were very visually interesting. Here are some of my favorites!