RedShark News contributor Rakesh Malik reminds us that there's more to NAB than cameras, highlighting some technology demos that caught his eye.
Several interesting technology demos at NAB 2015 came from small startups in modest booths with very little flash. Tucked away among the multitude of generic-looking stands, some of them didn't get much attention.
In no particular order, here are some highlights:
zLense has a clever system that uses an infrared imager to create a live depth map of the scene in front of the camera and feeds that depth map to a real-time 3D application. By combining a live chroma key with the live depth map, the software can integrate 3D elements with the scene in real time. Bruno Gyorgy, President of zLense, demonstrated by walking in front of his camera while his colleague loaded up a couple of 3D models. As Bruno walked around his green screen, his image intersected the models on screen quite realistically. The models weren't sophisticated or photo-realistic, but they were enough to show Bruno's image interacting with them.
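The exact pipeline is proprietary, but the core compositing decision is easy to sketch: at each pixel, the keyed subject wins only if the matte says it's present and its depth is nearer than the CG element's. A minimal illustration in Python (the array layout and function below are my assumptions, not zLense's API):

```python
# A rough sketch of depth-based compositing, assuming per-pixel depth is
# available for both the live camera and the CG render (hypothetical
# arrays; zLense's actual pipeline is not public).
import numpy as np

def composite(live_rgb, live_depth, cg_rgb, cg_depth, matte):
    """Merge a keyed live image with a CG render using per-pixel depth.

    live_rgb, cg_rgb:     (H, W, 3) float arrays
    live_depth, cg_depth: (H, W) distances from the lens, in metres
    matte:                (H, W) chroma-key matte, 1.0 where the subject is
    """
    # The subject takes a pixel only where the matte says it is present
    # AND it is closer to the lens than the CG element at that pixel.
    live_wins = (matte > 0.5) & (live_depth < cg_depth)
    return np.where(live_wins[..., None], live_rgb, cg_rgb)
```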
The hardware component mounts onto a lens like a mattebox. The unit carries several IR emitters and two IR sensors, and measures the arrival time of reflections from the subject to determine its distance from the lens; a cable feeds the live depth map to the computer running the software.
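The arithmetic behind that measurement is simple: the light travels to the subject and back, so the distance is half the round trip at the speed of light. An illustrative sketch (the 20 ns figure is made up for the example):

```python
# Time-of-flight arithmetic in miniature: the round trip covers twice the
# subject distance, so distance = (speed of light x round-trip time) / 2.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds):
    return C * round_trip_seconds / 2.0

# A reflection arriving 20 nanoseconds after emission implies a subject
# roughly 3 metres from the lens.
print(tof_distance(20e-9))  # ~2.998 m
```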
Bruno said that he is targeting broadcast studios with this product, which costs around $100,000 for a license including the software. The system can support multiple cameras with just one zLense device, enabling it to scale nicely and cost-effectively. zLense is also offering rentals to give smaller studios access to this technology.
He plans to eventually improve the fidelity of the depth map enough to pull an effective chroma key without a green screen.
Coolux was showing a projection mapping system with several methods for updating the projection map. The first demo projected a set of lines onto a regular geometric shape made of stacked cubes. The presenter adjusted a set of control points in the software, and the projector re-aligned the image to the cubes.
The second demo was more automated. This time the target was a silk screen with fiber optic cables embedded near its corners. The presenter clicked an "update" button, and the software projected a series of patterns into the space in front of it, used the fiber optic cables to determine where the corners were, and re-aligned the image, landing it almost perfectly on the silk screen.
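Both demos reduce to the same operation: once the four corners of the target surface are known, whether nudged by hand or sensed through the fiber optics, the projector's frame is warped onto them with a corner-pin, i.e. a planar homography. A rough sketch using OpenCV (the corner ordering and detection step are my assumptions; Coolux's solver isn't public):

```python
# A minimal corner-pin sketch, assuming the four detected screen corners
# arrive as pixel coordinates in projector space.
import cv2
import numpy as np

def corner_pin(frame, detected_corners):
    """Warp a full frame so it lands on the detected quadrilateral.

    frame:            source image to project
    detected_corners: 4x2 array of corner positions in projector space,
                      ordered top-left, top-right, bottom-right, bottom-left
    """
    h, w = frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(detected_corners)
    H = cv2.getPerspectiveTransform(src, dst)   # 3x3 homography
    return cv2.warpPerspective(frame, H, (w, h))
```

Re-running the corner detection and re-computing the warp every frame is all it takes to make the image follow a moving screen.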
The third demo was a video of a similar system, except that instead of using fiber optic cables embedded into the fabric, it had an RFID tracker attached to the back of each corner and a set of RFID detectors in the grid above the presenter. The image tracked the screen with remarkable precision while the person holding it twirled and swung it around erratically.
Vision 3D has turned a simple concept into an effective product. Its scanning adapter oscillates the lens around its axis with controllable speed and amplitude while the camera is running. It takes advantage of the fact that human visual processing smooths out small changes in parallax: as long as the amplitude of the oscillation is small enough, the recorded image appears stable. Because the lens position is changing slightly, however, the camera records a small amount of parallax as it moves. The result is a subtle increase in the apparent depth of an image and richer texture detail; it enhances the feeling of depth even in stereoscopic 3D films.
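The optics behind the effect are straightforward: a lateral lens offset b shifts an object at distance d by roughly f * b / d on the sensor, so near objects wobble more than far ones, which the brain reads as depth. A quick illustration with made-up numbers (Vision 3D doesn't publish its amplitudes):

```python
# Why the wobble reads as depth: a lateral lens offset shifts an object at
# distance d by roughly f * b / d on the sensor, so near objects move more
# than far ones. All figures below are illustrative, not Vision 3D's specs.
def image_shift_mm(focal_mm, offset_mm, distance_mm):
    return focal_mm * offset_mm / distance_mm

focal = 50.0   # lens focal length, mm
offset = 1.0   # peak lateral lens offset, mm
for d_m in (1, 5, 50):
    shift = image_shift_mm(focal, offset, d_m * 1000.0)
    print(f"{d_m:>2} m away -> {shift:.3f} mm shift on the sensor")
# 1 m: 0.050 mm;  5 m: 0.010 mm;  50 m: 0.001 mm
```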
Stabilizers are getting bigger AND smaller. DJI had a new Ronin M, which is a small Ronin. Letus had absolutely the cutest little stabilizer, the Helix Jr, which was so small and light that I nearly tossed it into Sony's palace by accident when I first picked one up. It's powerful enough to support a RED Epic, though with no room to spare. That combo is light enough that I could lift the whole thing with one hand, but it handled the Epic like it was nothing, a testament to both the design of the Helix and the strength of its custom motors.
Letus also had an ARRI Alexa on a Double Helix. THAT was a beast. It stabilized the Alexa effortlessly, but it was also heavy enough that I wouldn't recommend it without some form of support rig, at least if you plan on using it on a shot longer than 2-3 minutes (and that's if your operator is quite athletic).
FreeFly Systems had a suite of stabilizers of various sizes, in addition to its drones. Rather than a baby Movi, though, the standout was a brilliant kinetic controller that you hand to the second operator. To control the gimbal's direction, that operator simply turns the device until the desired frame appears in the display mounted on the controller. First-time users tended to start walking around to follow the action before remembering that someone else was carrying the camera. It was incredibly easy to use and can control any gimbal in FreeFly's lineup. FreeFly Systems also had a monstrously large drone with a load capacity of over 100 pounds.
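FreeFly hasn't published the protocol, but the control loop is presumably a simple mirror: read the controller's orientation from its IMU and forward it as the gimbal's target pose. A hypothetical sketch (read_imu and send_target are stand-ins, not FreeFly's API):

```python
# A guess at the shape of the control loop: the handheld controller's IMU
# orientation is mirrored as the gimbal's target orientation. Both callables
# here are hypothetical stand-ins; FreeFly's protocol is not public.
import time

def mirror_loop(read_imu, send_target, rate_hz=100):
    """Forward the controller's pose to the gimbal at a fixed rate."""
    period = 1.0 / rate_hz
    while True:
        yaw, pitch, roll = read_imu()   # degrees, from the handheld unit
        send_target(yaw, pitch, roll)   # gimbal drives toward this pose
        time.sleep(period)
```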
Defy had several of their gimbals on display, but their most interesting products were the Dactylcam and Dactylcam Lite. Both are based on the same idea: you get the rig, a long run of cable and a remote. You can mount a camera on it directly or via a gimbal. The bigger one has a payload capacity of 50 pounds; the smaller one, 10. Specialized, to be sure, but potentially very useful, especially for high-angle tracking shots where you need to capture dialog.
There were some EasyRigs to try out, as well as an unusual-looking exoskeleton for supporting a gimbal that resembled an extra pair of arms. It folded up like wings to pack away.
Streaming video live over wireless networks was a big deal at NAB this year. JVC/Kenwood's new 4K camera took the cake here, with its integrated WiFi live-streaming hardware. One of the users mentioned that he took two cell phones with him, figured out which one was giving him better data bandwidth, and simply picked that network on the camera. The camera took care of the rest. JVC/Kenwood's reps also said that they'd set up private WiFi networks in cities that lacked performant cellular coverage. Being able to stream live video over WiFi is a great asset for news applications, and JVC/Kenwood has a great solution in this arena.
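That pre-flight check is easy to approximate: time a small download while joined to each phone's hotspot in turn, then point the camera at whichever network measured faster. A crude sketch (the test URL and byte count are arbitrary choices of mine, not anything JVC ships):

```python
# A rough version of the operator's bandwidth check: measure download
# throughput on each candidate hotspot and keep the faster one.
import time
import urllib.request

def probe_throughput(url, n_bytes=1_000_000):
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        data = resp.read(n_bytes)
    elapsed = time.monotonic() - start
    return len(data) / elapsed  # bytes per second

# Join hotspot A, run probe_throughput(test_url), repeat on hotspot B,
# then select the faster network on the camera.
```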
Sony's marketing manager Peter Crithary showed me Sony's solution: a box that connects to any of Sony's cameras and streams a proxy version of the video feed to Sony's cloud. While the latency would be too high for an AC to rely on, in a place with a good WiFi network a director could monitor takes via the cloud. The box itself is fairly small, so it doesn't add much weight or bulk to the camera; not that you'd notice if you were shooting with something like an F65.
RED had a solution as well, but due to time constraints and the fact that RED's booth was almost as crowded as Blackmagic's, I didn't have time to talk to them about it in detail. It's a box that you add to a RED camera like any other module, though it seemed fairly bulky compared to the Sony and JVC/Kenwood solutions. Of course, JVC/Kenwood's solution is integrated and the camera is relatively compact; at $4500, it's probably the most cost-effective 4K solution for live broadcast, though I don't know how much the Sony and RED boxes cost. Sony's solution might work with other cameras as well.