Wednesday, February 1, 2017

About Machinis Ludo

Machinis Ludo is a game jam series run by Prof. Morgan McGuire at Williams College, with a satellite location run by Michael Mara at Stanford University and participants around the world over the Internet. A game jam is an intense party at which developers set aside their regular work and create new video games from scratch, working individually or in small groups.

Working in the McGuire Graphics Lab at Williams College for Machinis Ludo 2
Some game jams have a physical location and others are entirely virtual, with developers posting to a communal blog to share their work. Ludum Dare and the Global Game Jam are two of the largest virtual jams. Machinis Ludo is a hybrid, with physical sites but half of the participants working remotely. Everyone uses this blog to share their progress. Look at the summary page from Machinis Ludo 2 to see some examples of what people create in a game jam.

Monday, August 22, 2016

2016 VV-Williams-NVIDIA VR Hackathon Results

The Machinis Ludo VIII game jam was a very special event. We spent a 12-hour day on VR demos and VR prototyping. Due to the amount of equipment and technical knowledge required, we restricted this event to invited staff and students of Vicarious Visions, Williams College, and NVIDIA. Our equipment was provided by those institutions, with additional head-mounted displays donated by Oculus and Google.



Wednesday, August 5, 2015

[JL] VR Ocean

I now have a VR ocean up and working, with real-time GPU ray tracing for the Oculus Rift using G3D::VRApp. It's very soothing to watch the waves. Open source code and a demo will be coming soon, after code cleanup.







Tuesday, August 4, 2015

[MM] Victory, with < 1 hour to go


Here we have it: shoddy full-body tracking with an extra glitch effect (an adaptation of https://www.shadertoy.com/view/4t23Rc, applied to things with an alpha channel)!

You can see the calibration isn't quite right when I lean over to shut off the video at the end. I definitely need an automatic calibration method: manual alignment is too hard to get right (I spent way too long on it today) and too easy to knock out of whack.

My next steps are definitely:

  1. Integrate an automatic calibration method for finding the relative transform between the Oculus camera and the Kinect.
  2. Wire up my second Kinect.
  3. Try generating a mesh instead of a point cloud.

This was a super fun jam, and I know how I'd approach things differently if I had to do it all over.

A Moving Light Source in a Dark Scene

Since my last post, I have worked on a couple of small things to make the hand tracking and visualization a little neater. I also worked on setting up the initial experience that led me to work with the Leap Motion in the first place.

For those who remember, my original goal was to create a VR experience in which the subject is placed in a dark environment and controls the only light source with their hands. Below is a video of what I have done so far.
In the original example, the light source is a torch that is constantly on. In my version, the light source the player controls is an orb that is summoned by looking at your right hand while it is held flat and facing upwards. If you tilt your hand or close it past a certain amount, the light source disappears.
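
The hand part of the check is simple. Roughly, it looks something like this (a simplified sketch using the Leap Motion C++ API; the thresholds are illustrative rather than tuned values, and the "looking at the hand" gaze test is omitted):

    #include "Leap.h"

    // Sketch: should the light orb be visible this frame, and if so, where?
    // Returns true when the right hand is open, flat, and facing upwards.
    // NOTE: assumes the hand data has already been remapped into a Y-up frame;
    // with the controller mounted on the HMD the raw device axes are different.
    bool lightOrbActive(const Leap::Controller& controller, Leap::Vector& orbPosition) {
        const Leap::Frame frame = controller.frame();
        const Leap::Hand hand = frame.hands().rightmost();
        if (!hand.isValid()) { return false; }

        // palmNormal() points out of the palm, so a flat, upward-facing hand has a
        // normal close to +Y. grabStrength() is 0 for an open hand and 1 for a fist.
        const bool facingUp = hand.palmNormal().dot(Leap::Vector(0.0f, 1.0f, 0.0f)) > 0.8f;
        const bool handOpen = hand.grabStrength() < 0.3f;

        if (facingUp && handOpen) {
            orbPosition = hand.stabilizedPalmPosition();
            return true;
        }
        return false;
    }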

Williams College VRJam Site Pictures

Jamie has an infinite ocean ray traced in real time and is fixing shading bugs

Dan can now cast a light spell using a hand gesture and is improving the rendering

Morgan finished his jam project and is now adding features to G3D::VRApp

[MMc] Body-space HUD layer working

Iron Man-style 2.5D Body Space HUD
Anyone using VRApp now has access to a virtual 2.0m-wide monitor (the DK2 is so low resolution that you need the virtual monitor to be large). It is in body space, so it is always near you, but you can move your head to see various parts of the screen at higher resolution. Everything is open source and committed to http://g3d.codeplex.com.
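
The placement trick is that the HUD is parented to a yaw-only body frame rather than the head frame, so turning your head scans across the virtual monitor instead of dragging it along. A rough sketch of that math (G3D types only; this illustrates the idea and is not the actual VRApp code):

    #include <G3D/G3DAll.h>  // header name varies across G3D versions

    // Sketch of body-space HUD placement. Compute this when the HUD is summoned
    // (or smooth it slowly), not every frame, so that head rotation scans across
    // the monitor instead of dragging it along.
    CFrame bodySpaceHUDFrame(const CFrame& headFrame) {
        // Yaw-only forward direction: project the head's look vector onto the ground plane.
        Vector3 forward = headFrame.lookVector();
        forward.y = 0.0f;
        forward = forward.direction();

        // Center of the virtual monitor: about 1.5 m in front of the head, at head height.
        CFrame hud;
        hud.translation = headFrame.translation + forward * 1.5f;

        // Face the quad back toward the viewer, upright.
        hud.lookAt(headFrame.translation);
        return hud;
    }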

[JL] Ocean Waves and Movement

I'm Jamie Lesser, a student at Williams College and a researcher in the Williams College Graphics Lab. I will be joining for the second half of the VR Jam and hope to explore sea motion. Is the visual component of sitting on a bobbing boat on the ocean enough to make you feel like you are really bobbing? I find this question especially interesting because the effect requires no movement from the user, and so should be convincing in most settings. The main drawback would be increased motion sickness for those who are prone to seasickness.



[MMc] Overnight GUI improvements

I made some changes last night, working from my laptop without a DK2. This verifies that you can use VRApp without the physical hardware installed (my brand-new laptop with Apple's idea of a "fast" GPU can only render at 6 fps, so you wouldn't want a DK2 attached).


[MM] Self sabotage

Instead of hitting "publish" on the last blog post and going to sleep, I stayed up and manually aligned the camera (by flying it around and micro-adjusting, nothing even smart) to get the first video of the generated avatar. Ugly as heck (I even left a debug coordinate frame in...) but really cool in person. I can't wait to improve it, but I really do need sleep to make this jam the best it can be.


[MM] New Dev Station, and another step closer

Just an average living room

Monday, August 3, 2015

[MM] Update, point clouds

Lots of hardware issues and some software ones mean this update isn't the most exciting, but it is obviously better than before! I am rendering a point cloud generated from one Kinect (which wasn't the one I started with... hardware issues...) stably in world space. Here I did a programmer dance for you:

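For reference, generating those points is just a pinhole unprojection of each depth pixel followed by the (currently hand-tuned) Kinect-to-world transform. A sketch of the idea (illustrative intrinsics and G3D types, not my actual pipeline):

    #include <cstdint>
    #include <vector>
    #include <G3D/G3DAll.h>  // header name varies across G3D versions

    // Sketch: unproject a Kinect depth frame into a world-space point cloud.
    // fx, fy, cx, cy are the depth camera intrinsics; kinectToWorld is the calibration
    // transform (which also folds in the flip from the camera's y-down, z-forward
    // convention to G3D's y-up, -z-forward convention).
    std::vector<Point3> depthToWorldPoints(const uint16_t* depthMM, int width, int height,
                                           float fx, float fy, float cx, float cy,
                                           const CFrame& kinectToWorld) {
        std::vector<Point3> points;
        points.reserve(size_t(width) * size_t(height));
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                const uint16_t d = depthMM[y * width + x];
                if (d == 0) { continue; }                  // 0 means "no depth reading"
                const float z = d * 0.001f;                // millimeters -> meters
                const Point3 pCamera((x - cx) * z / fx,    // pinhole unprojection
                                     (y - cy) * z / fy,
                                     z);
                points.push_back(kinectToWorld.pointToWorldSpace(pCamera));
            }
        }
        return points;
    }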

Next steps:

  1. Turn off the computer.
  2. Instead of taking a break, do wire management and figure out a decent setup for capture in my apartment.
  3. Manually align the Oculus frame and head position in the point cloud.
  4. Profile and optimize so that we run at 75 Hz.
  5. Make the point cloud flicker in a cooler way.
  6. Claim victory.
  7. Add automatic calibration between the Oculus and the Kinect.
  8. Add a second Kinect.
  9. Add a third Kinect.
  10. Experiment with other rendering techniques.

[MMc] First successful HUD rendering in VR

Pressing the Tab key now moves the G3D GUI (everything in onGraphics2D) from the debugging mirror display into the VR world. It appears in front of the HMD, floating about 1.5 m away from the viewer. The effect is surprisingly cool when you're in the HMD, even with a boring debugging GUI: it is augmented reality for a virtual reality world.


[DE] Hacked Leap Motion + DK2

At the end of my last post, I said I was going to start working with the Leap Motion controller in a VR environment. Well, I have attached it to the front of an Oculus DK2 using masking tape and have begun working.
After a bit of confusion about the relative frames of the Leap Motion controller and the Oculus, I was able to get the controller sort of working. Below is a video showing me wiggling both hands in front of the Oculus.
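
For anyone trying the same hack, the main gotcha is that the Leap reports positions in millimeters in its own device frame, so every point has to be scaled, re-oriented into head space, and then composed with the HMD pose. Something like the sketch below (the axis signs and the controller offset are rough assumptions that depend on how the controller is taped on, not a verified calibration):

    #include "Leap.h"
    #include <G3D/G3DAll.h>  // header name varies across G3D versions

    // Sketch: express a Leap Motion position (millimeters, device frame) in world space,
    // given the current head pose. With the controller taped to the faceplate, the device's
    // "up" axis points out of the headset. The axis signs below are illustrative; flip them
    // until the virtual hands line up with your real ones.
    Point3 leapToWorld(const Leap::Vector& pLeap, const CFrame& headFrame) {
        // Millimeters -> meters, and remap device axes into G3D's head frame
        // (x right, y up, -z forward).
        const Point3 pHead(pLeap.x * 0.001f,     // device x -> head x
                           pLeap.z * 0.001f,     // device z -> head y
                           -pLeap.y * 0.001f);   // device y (out of the HMD) -> head -z

        // Guess at the offset from the eye center to the controller (~8 cm forward).
        const Point3 pEye = pHead + Vector3(0.0f, 0.0f, -0.08f);

        return headFrame.pointToWorldSpace(pEye);
    }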

[MMc] First HUD layer success

The red rectangle in the center of the view is a 2D surface directly composited onto the HMD. The key bug was that I was positioning it behind the head before.


It appears that changing the HMD compositing quality may have been affecting the vertical axis direction and the rendering rate. I'm not sure how this happened, but for the moment I have disabled it to avoid the problem. Now, on to actually rendering a 2D GUI into that red square.


[MMc] Starting on GUI rendering layer

I created an ovrLayerType_QuadHeadLocked layer and initialized it. I'm just rendering it solid red for the moment. Such a layer has a 3D position. From looking at the Oculus samples, it appears that rotation is ignored on such a layer and the translation should have a negative Z value.
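
For reference, the layer setup itself boils down to something like the sketch below (0.6-era CAPI names; the swap texture set creation, GUI rendering, and SubmitFrame call are omitted, and field names may differ in other SDK versions):

    #include <OVR_CAPI.h>

    // Sketch of the head-locked GUI layer setup. guiTextureSet is the swap texture
    // set that the 2D GUI is rendered into, created elsewhere.
    ovrLayerQuad makeGUILayer(ovrSwapTextureSet* guiTextureSet, int guiWidth, int guiHeight) {
        ovrLayerQuad layer = {};
        layer.Header.Type  = ovrLayerType_QuadHeadLocked;
        layer.Header.Flags = 0;

        layer.ColorTexture = guiTextureSet;
        layer.Viewport     = { {0, 0}, {guiWidth, guiHeight} };

        // Rotation appears to be ignored for head-locked quads, so use the identity
        // orientation; negative Z puts the quad in front of the viewer.
        layer.QuadPoseCenter.Orientation = { 0.0f, 0.0f, 0.0f, 1.0f };  // x, y, z, w
        layer.QuadPoseCenter.Position    = { 0.0f, 0.0f, -1.5f };

        // Physical size of the quad in meters.
        layer.QuadSize = { 1.0f, 1.0f * guiHeight / guiWidth };

        return layer;
    }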

Right now the state is not good. Not only does the GUI layer not appear on screen correctly, but about 30% of the time the 3D view flips upside down. It is extremely unpleasant when this happens while I'm wearing the HMD. I suspect that this may be an unintended consequence of my automatic effect-disabling code, which is based on frame rate.

The right eye also starts flashing cyan (the G3D default background color) every now and then. I think that both visual errors are separate from the GUI work that I've been doing.


[DE] First Leap Motion Results

Because of the extremely helpful instructions on the Leap Motion site, I have been able to make good progress. I have gotten the Leap Motion SDK hooked up to my project and have begun experimenting with the hand tracking. While eventually I would like to include a proper hand model, for now I have just been drawing out the joint positions. Below is a photo of my current representation of a hand. The blue spheres represent the joints in the hand, while the green sphere is the center of the palm.
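
The drawing loop is just an iteration over the tracked hands, fingers, and bones. Roughly (a simplified sketch; drawSphere() stands in for whatever debug-draw call the app actually uses, and positions are left in the Leap device frame):

    #include "Leap.h"
    #include <G3D/G3DAll.h>  // header name varies across G3D versions

    // App-specific debug-draw call (placeholder).
    void drawSphere(const Point3& center, float radius, const Color3& color);

    // Convert a Leap position (millimeters) to meters.
    static Point3 toMeters(const Leap::Vector& v) {
        return Point3(v.x, v.y, v.z) * 0.001f;
    }

    // Sketch: one green sphere at the palm center, blue spheres at every bone joint.
    void drawHandJoints(const Leap::Frame& frame) {
        for (int h = 0; h < frame.hands().count(); ++h) {
            const Leap::Hand hand = frame.hands()[h];
            drawSphere(toMeters(hand.palmPosition()), 0.02f, Color3::green());

            const Leap::FingerList fingers = hand.fingers();
            for (int f = 0; f < fingers.count(); ++f) {
                const Leap::Finger finger = fingers[f];
                // Each finger has four bones; nextJoint() is the end closer to the tip.
                for (int b = 0; b < 4; ++b) {
                    const Leap::Bone bone = finger.bone(static_cast<Leap::Bone::Type>(b));
                    drawSphere(toMeters(bone.nextJoint()), 0.01f, Color3::blue());
                }
            }
        }
    }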

[MMc] Pre- and Post-distortion, motion blur working in VRApp

DebugMirrorMode::POST_DISTORTION

DebugMirrorMode::PRE_DISTORTION

[MM] Kinect/Oculus/G3D playing nicely together

After struggling with annoying include conflicts, redefinition errors, and linker errors, we have success #1!

The Kinect data is being streamed live to a VR-enabled G3D app. Note that the color data is already properly aligned with the depth, thanks to borrowing some Kinect code from my labmates, who have done extensive work on 3D scene reconstruction using the Kinect and other similar RGBD sensors, some of which required highly accurate color alignment (http://www.graphics.stanford.edu/~niessner/papers/2015/5shading/zollhoefer2015shading.pdf).


If you are very observant, you may notice I am not reaching the target framerate of 75 Hz. This is worrying, but I won't worry about it until I have a decent point cloud. Back to work!