Building Your Own Home Video Recording & Production Studio: Behind The Scenes Of Our New GDELT Tutorials!

With the third video in our new GDELT Tutorial series launching this past weekend, we now have tutorials for Television Explorer: The Basics Of Television News Search, Interactive Sentiment Mining With BigQuery and Using Carto’s BigQuery Connector To Seamlessly Map Covid-19 News.

What does it look like to rapidly launch a small video production studio designed to fit in an extremely small space under a tight timeline? What do you need to think about for cameras, microphones, lights, backgrounds and the myriad supporting cables and equipment? What about the post-production editing environment? Read on to learn how we created the GDELT Tutorials series!

CAMERA

At the center of any video series is the camera. People just starting out with their first video series often fixate on the technical characteristics of the camera’s sensor, especially its optical resolution, without understanding the importance of sensor size, lens assembly and optical processing pipeline.

From Web Cameras To DSLRs

A modern Mac Air has a 720p camera built in, but its pinhole optics mean the resulting visuals are extremely poor and are degraded even further over video conferencing. Even with intense front and ambient lighting, it is extremely difficult to achieve adequate visual quality. Moreover, the camera’s low vertical position and upwards angle make for unflattering imagery (though in a pinch you can simply place the laptop on a pile of books to bring it to eye level). Despite intense front and ambient lighting, the visual quality of my talk given over a 2019 Mac Air is less than ideal.

Upgrading to a dedicated standalone webcam yields only minimal quality improvement, since most offerings still rely on pinhole lenses even when they offer 1080p or even 4K resolution like the Logitech Brio. For example, compare Azeem’s Canon EOS M6 DSLR on the left to my Brio 4K on the right. Despite its 4K capabilities, the Brio’s pinhole lens yields a soft, ill-defined image in which both foreground and background are equally in focus (no bokeh effect).

Contrast these two previous videos with the crispness, rich vivid detail, depth and lighting rendition of my recent Google Cloud DevRel talk or any of the three GDELT Tutorial videos to see just how stunningly different the results look from a DSLR.

While DSLR cameras are most associated with their still photography legacy, today they have also found widespread use as video cameras, offering superior optics for the price and significant cost savings compared to dedicated video cameras.

Using a DSLR, you simply mount your camera on a tripod, push the record button and film your video!

The Canon SL3

Which DSLR camera should you choose? A lot of it depends on the specific features you need, whether you already have lenses for a given model series from previous camera ownership and just your preference for the visual look and feel of its imagery. Search on YouTube to see video shot by each camera to see which you prefer.

Our own videos and streaming are done with a Canon SL3 (we use the body+lens “kit” version). It is one of the cheapest and most accessible of Canon’s EOS Rebel series that still supports the majority of its high-end features, including 4K recording. While its 4K frame is significantly cropped, requiring greater distance between camera and subject to maintain the same framing as 1080p shooting, 4K support helps future-proof the camera as 4K becomes more streamable. There is a tradeoff in the available framerates across the supported resolutions (only 24fps is supported for 4K and only 30 and 60fps for 1080p), but for most applications this won’t be an issue.

Another benefit of the Canon SL3 is that the rear monitor flips out horizontally 180 degrees and rotates vertically to face forward. Thus, when the camera is mounted on a tripod facing you, you can see the exact camera framing on the monitor, along with all of the key vitals, including a realtime RGB histogram.

The most basic configuration involves simply mounting the camera on any standard tripod. Even a generic AmazonBasics tripod will work, and the camera itself weighs little. When selecting a tripod, keep an eye on its total footprint when the legs are extended if you are working within a constrained space. Ensure that the tripod is tall enough to position the camera at the subject’s face height, with the ability to extend several inches above that for future flexibility in case you make changes to the shooting layout that necessitate changing the camera’s height.

Memory Cards

For those wishing to simply produce prerecorded videos, all major cameras like the SL3 support recording directly to the camera’s memory card. Ensure that your memory card supports the write speed necessary for the resolution you plan to shoot, such as 4K, and that it has sufficient capacity for long shoots. As long as the camera supports it, you can also select a card faster than what the camera can use in order to maximize transfer speed when copying files to a computer later on using a memory card reader. The SL3 supports UHS-I cards, so cards like the SanDisk Extreme Pro SDXC UHS-I work well. Keep in mind, however, that you’ll be limited to 30-minute recordings, as explained in a moment.

Lens

In addition to selecting the camera body, you also have to select what lens(es) to use with it. In our case, the SL3 kit lens has proved exceptionally adept for our needs, though those shooting close-up 4K may need a wider-angle lens due to the camera’s 4K crop factor, and depending on the size of your environment and other needs you may eventually want a different lens. For most home recording applications, however, the kit lens should be more than adequate.

Power Supply

Few DSLR cameras come with a power supply and the SL3 is no exception. While you can shoot video using only the battery, it is extremely cumbersome to swap out the battery midway through a shoot. Instead, all manufacturers offer a plug-in power adapter for their cameras, though most require you to purchase it separately. Note that the specific power supply typically differs across model lines, and even two very similar-looking cameras may use different power supplies, so make sure to consult the manual for your camera. In the case of the SL3, the adapter consists of a power transformer and a shell battery that inserts into the camera like an ordinary battery, but with a cord that exits the battery compartment through a removable rubber stopper on the side of the compartment.

White Balance & Autofocus

If you notice that you appear off-color from what you would expect, such as being overly orange or blue, it means the camera’s automatic white balance is struggling to correctly assess the scene. This often occurs at the most unexpected times, such as when shifting slightly in your chair makes a portion of the background more visible and suddenly shifts the coloration of the video mid-shoot. Most DSLRs like the SL3 include a manual white balance feature in which you can adjust the white balance until the scene looks correct to you. This has the added benefit of fixing the white balance so it doesn’t change while shooting. You can also leave the coloration to be corrected in post, though particularly bad color rendition may be difficult to fully correct.

Autofocus should work well in most video modes, though in 4K some cameras like the SL3 use a less responsive autofocus mechanism. While modern autofocus is extremely accurate, you may find in some edge cases, especially with poor lighting, that the camera’s focus may drift. You may find that adjusting your background or centering or tweaking your lighting helps. Otherwise you can switch to manual lens focusing, but keep in mind that depending on the distance of the camera to the subject and the lens being used, the subject may have to keep within a very narrow range of distance from the camera to remain in focus when using manual focus.

Maximum Recording Time

One key consideration to keep in mind is that nearly all DSLRs, including the SL3, enforce a maximum recording time of 30 minutes. Once you’ve been recording for 30 minutes, the camera will stop and you will have to simply hit record again and begin a second recording file. You can repeat this process as many times as your memory card has room for.

This means that even though a 128GB memory card has room for 2 hours and 21 minutes of 4K footage or 9 hours and 23 minutes of 1080p30 footage on the SL3, you will have to break your filming up into 30-minute segments when recording directly to the memory card.
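
As a sanity check, the arithmetic behind those figures is straightforward. The sketch below assumes approximate recording bitrates of roughly 120 Mbps for the SL3’s 4K24 mode and 30 Mbps for 1080p30 (our assumptions, not official specifications), which roughly reproduces the durations above:

```python
# Back-of-the-envelope recording time from card capacity and bitrate.
# The bitrates are rough assumptions for the SL3 (not official specs) --
# substitute your own camera's figures.
CARD_GB = 128
BITRATES_MBPS = {"4K24": 120, "1080p30": 30}

for mode, mbps in BITRATES_MBPS.items():
    seconds = CARD_GB * 8000 / mbps      # 1 GB is roughly 8000 megabits
    hours, remainder = divmod(int(seconds), 3600)
    print(f"{mode}: about {hours}h {remainder // 60}m on a {CARD_GB}GB card")
```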

To eliminate this recording time limit, the camera’s HDMI output comes to the rescue and allows for live streaming to any application!

HDMI Capture: Live Streaming & Computer Recording

Nearly all DSLR cameras today offer HDMI video output, meaning they can live-output what the camera sees to any HDMI input device like a standalone video recorder, or, most importantly, a computer using a capture card! When using HDMI output rather than recording to the memory card, the 30-minute time limit is removed, allowing you to film essentially indefinitely. We’ve personally filmed for up to 6 hours at a time nonstop with the SL3 without issue.

In this configuration, you simply plug an HDMI cable into the side of the camera and plug the other end into a capture card connected to your computer. Just search Amazon for “USB video capture” to see myriad small USB dongles that are all more or less clones of one another. Keep in mind that many advertise “4K video support” but only as the input signal – the recorded signal is still 1080p30 or 1080p60. The important figure here is the “capture” or “record” resolution, which is the actual resolution output to the computer: some capture devices will accept a 4K input signal but downgrade it to 1080p for actual recording. Also make sure that the capture device handles the audio signal as well, as some devices do not. Most 1080p USB capture devices can be found in the range of $30-50. Higher-priced devices that support true 4K capture do exist, while PCI expansion cards and dedicated capture appliances often support multiple 4K streams in parallel and even 4K60.

Most generic USB capture devices take the form of a small USB dongle about an inch long and require no external power. They simply have a USB plug on one side that plugs into the side of your computer and an HDMI port on the other side into which you plug the HDMI cable from your camera. That’s quite literally all there is to it – the computer magically sees your camera as a generic webcam.

To use, you simply turn your camera to video mode (on the SL3 that means turning the power switch to movie mode) and plug the HDMI cable from the camera into the capture device and the capture device into your USB port! The camera appears as a standard camera device, meaning every piece of software on your computer, from your web browser to Zoom, Skype, Teams, Hangouts, Meet, WebEx and the rest, instantly sees your new camera without any drivers or configuration.
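
If you’d like to confirm what resolution and framerate your capture device is actually delivering to the computer (rather than what its packaging advertises), a short script using OpenCV can report it. This is a minimal sketch; the device index 0 is an assumption and may differ on your machine.

```python
# Confirm what resolution and framerate the capture dongle is actually
# delivering to the computer. Device index 0 is an assumption -- it may
# differ if a built-in webcam or other cameras are attached.
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)   # request 1080p; the device will
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)  # clamp to whatever it supports

ok, frame = cap.read()
if ok:
    print(f"Capturing {frame.shape[1]}x{frame.shape[0]} "
          f"@ {cap.get(cv2.CAP_PROP_FPS):.0f}fps")
else:
    print("No frame received -- check the device index and the HDMI signal")
cap.release()
```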

Note that many cameras like the SL3 actually use a “mini HDMI” port so you’ll need a “mini HDMI to HDMI” cable to connect it to your computer.

Inactivity Timeouts & “Clean HDMI”

The two most important features to look for in a DSLR to be used in this way are its camera timeout behavior and something called “clean HDMI.”

Even in HDMI mode, some cameras automatically turn off after a period of inactivity in which the user has not pressed any buttons on the camera. Not all cameras allow you to turn off this “inactivity timeout,” so make sure yours, like the SL3, does. Note that on the SL3 you will have to turn the camera mode to Manual to access many of these advanced features.

The most important requirement of all for HDMI use is that the camera supports so-called “clean HDMI.” Many lower-priced cameras output exactly what you see in the viewfinder over their HDMI signal, meaning the video coming into your computer includes the computer-generated overlay with the focus points and all of the shooting status information, making it unsuitable for use. Cameras like the SL3 that support “clean HDMI” have an option to turn off that overlay when feeding an HDMI signal, meaning their HDMI video signal contains only the video itself, exactly as if it were being fed from a traditional video camera. Cameras that support this feature typically clearly denote its presence in their documentation. Note that the SL3 locks out most menu features when outputting clean HDMI with a cable plugged in, so you will have to unplug the cable or temporarily disable clean output to make certain configuration changes on the camera.

Video Conferencing & Live Streaming

To use your new DSLR video camera for web conferencing and live streaming, there is nothing else you have to do. Just select the USB capture device from the camera dropdown in your conferencing software and that’s it!

Note that the camera and capture device can both get relatively warm after extended use and in an extremely hot environment such as outdoors it is possible for the camera to automatically turn off with an over temperature error, but this is exceptionally rare. Your computer will also get quite warm, with smaller laptops like MacBook Airs getting extremely warm at the top from the processing required for the video.

We’ve continuously streamed for up to 6 hours uninterrupted during lengthy half-day events without any issue, so as long as you are indoors in an air-conditioned space, you will be unlikely to run into any issues.

Recording To Your Computer

If you want to record a tutorial to your computer, there are myriad software packages available, free and paid. For a free recorder, just use your MacBook Air’s built-in QuickTime recorder. Launch QuickTime Player and select “Record Movie.” You can click on the settings dropdown beside the recording button to select the video and audio inputs and the recording quality (set it to “Maximum” to record at full 1080p, as it will otherwise typically default to 720p). Note that by default QuickTime may use a ProRes codec that is not supported by all video editing software – to fix this, when you have finished a recording, choose “Export” from the menu and export to a 1080p file, which will generate a file readable by most editing software, including Premiere Elements.
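
If you would rather batch-convert QuickTime’s ProRes recordings than export them one at a time from the menu, a small script that shells out to ffmpeg can do the same job. This is a sketch: it assumes ffmpeg is installed and on your PATH, and the filenames are placeholders.

```python
# Batch-convert a ProRes .mov from QuickTime into an H.264 MP4 that most
# editing packages can read. Assumes ffmpeg is installed and on the PATH;
# the filenames are illustrative placeholders.
import subprocess

src = "tutorial_take1.mov"   # hypothetical QuickTime ProRes recording
dst = "tutorial_take1.mp4"

subprocess.run([
    "ffmpeg", "-i", src,
    "-c:v", "libx264", "-crf", "18", "-preset", "medium",  # visually near-lossless
    "-c:a", "aac", "-b:a", "192k",
    dst,
], check=True)
```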

For tutorials, you will typically launch two recordings, one for the video and one for the screen. Just start a “Record Movie” session, then start a “Record Screen Capture” session, and both will record simultaneously. When you are done, switch back to QuickTime Player to end the movie recording and click the stop button at the upper right of the menu bar to end the screen capture.

The Audio Delay & Lip Sync Failure

When you record your first video, you’ll notice that your audio is terribly out of sync with your lips, with as much as a quarter-second delay between hearing a word and seeing your lips form it. This is due to the fact that audio capture typically has minimal latency (though this depends on your buffer size), while video capture has a high degree of latency as the camera converts the signal to HDMI, the capture device ingests it into the computer and the computer’s CPU processes it. The end result is that audio passes through to the computer within tens to hundreds of milliseconds whereas the video has a much greater delay.

Gamers frequently correct for this in software using tools like OBS’ audio delay setting, but this adds additional complexity and requires additional CPU overhead and can introduce nonlinear sync loss at times of CPU overload.

Instead, a better alternative is to feed your audio stream directly to your camera and use the camera’s HDMI audio feed as your computer’s microphone source, as we describe in the next section. This has the added benefit that it ensures audio is captured when recording directly to the camera’s memory card when not using HDMI output.

Fixing The Audio Delay Using The Camera As Audio Source

Most DSLR cameras like the SL3 have a microphone input, typically a 3.5mm mini jack. This jack supports many kinds of analog mics, though you may need additional hardware for mics with specialized requirements like phantom power. However, many run-of-the-mill mics with 3.5mm plugs should be capable of being plugged directly into the camera (though always check their specifications first before plugging anything into your camera that could damage it – NEVER simply plug something in just because it has the correct plug).

While you will need to consult the specifications of your specific microphone, most common microphones with a 3.5mm plug can be connected directly to the camera (again – check first). In fact, many are actually designed for use with DSLRs. In some cases, XLR outputs may be connected using an XLR-3.5mm adapter. Keep in mind, however, that specialty microphones, such as those requiring additional power, may require additional hardware.

What about USB mics? Most USB mics provide an earphone jack for direct monitoring, allowing the speaker to hear their voice without the slight delay introduced by the computer. You can connect this headphone jack directly to the mic input on your camera using any standard 3.5mm-to-3.5mm male-to-male audio cable. Make sure to use a traditional 3-conductor (TRS) cable, not a 4-conductor (TRRS) headset cable, since the latter adds an extra conductor for a headset microphone separate from the left/right audio channels – and here the audio channels are what we are capturing.

Note that in our specific environment, the proximity of powerful electronic equipment and power supplies between the microphone and camera, and our inability to route the audio cable sufficiently far from those supplies, induced a slight hum and noise onto the line. Upgrading to an AudioQuest Tower cable eliminated this induced noise, but in most cases any reasonably shielded audio cable should be sufficient.

On the camera side, ensure the SL3 is in Manual mode, set the audio settings to Manual and turn off the Wind Filter and Attenuator. Then set the Recording Level all the way to the left to 0 and you should see no movement of the levels when you speak. Set it just one step to the right to “1” and you should see the levels move when you speak.

Now adjust your microphone’s Gain and Headphone Volume levels until you see the camera levels hovering between -12dB and -10dB or so for strong speaking, peaking no higher than -6dB for your loudest, most energetic speech, with the levels settling to no sound registering when you are quiet. On a Blue Yeti USB mic this typically involves setting the Headphone Volume to around 30% and the Gain to around 25% when feeding an SL3.
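
If you would like to sanity-check your levels outside the camera, a short test recording saved as a 16-bit WAV file can be inspected for its peak level. This is a minimal sketch using the standard library and NumPy; the filename is a placeholder, and the camera’s meter and a dBFS reading won’t line up exactly, so treat it as a rough cross-check.

```python
# Report the peak level (in dBFS) of a short 16-bit WAV test recording as a
# rough cross-check of the on-camera meters (the two scales won't match
# exactly). The filename is a placeholder.
import wave
import numpy as np

with wave.open("level_test.wav", "rb") as w:
    frames = w.readframes(w.getnframes())

samples = np.frombuffer(frames, dtype=np.int16).astype(np.float64)
peak = np.max(np.abs(samples)) / 32768.0     # normalize to full scale
if peak > 0:
    print(f"Peak level: {20 * np.log10(peak):.1f} dBFS")
else:
    print("Silence -- no signal captured")
```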

Remember that USB microphones will still need to be plugged into your computer even if you aren’t using them as a microphone source on the computer, since they draw their power from the computer.

Then switch the audio input of your streaming or recording software to your USB capture device and that’s it! Your audio will now be perfectly lip synced to your video.

When using a USB microphone’s earphone jack, remember that it is designed for monitoring, not high-fidelity reproduction. Depending on the microphone design, you are going from the analog condenser capsule through an ADC conversion, then back through a DAC conversion to the headphone jack (though some mics may bypass the ADC->DAC), then to the camera through another ADC conversion, then through the camera’s processing pipeline, then out through the camera’s HDMI pipeline, through the USB capture device and finally through whatever compression your capture software is using (if any). That’s a lot of steps that can degrade the quality of your audio.

For our tutorial videos, we set the video capture to use the camera analog audio and the screen capture to use the USB microphone’s native digital audio, which is higher quality. This allows us to switch to the higher-quality audio during the screen capture portion of the video or to switch entirely to the USB audio in post by aligning the two tracks and replacing the camera audio with the native digital audio.
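
For the post-production track swap described above, the offset between the camera’s audio and the USB microphone’s native recording can be estimated automatically by cross-correlating the two tracks rather than aligning them by eye. The sketch below assumes both tracks have been exported as mono WAV files at the same sample rate; the filenames are placeholders.

```python
# Estimate the offset between the camera audio and the USB mic's native
# recording via cross-correlation so the cleaner track can be slid into
# alignment in the editor. Assumes mono WAV files at identical sample rates;
# filenames are placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

rate_cam, cam = wavfile.read("camera_audio.wav")
rate_usb, usb = wavfile.read("usb_mic_audio.wav")
assert rate_cam == rate_usb, "resample one track so the sample rates match"

n = 30 * rate_cam                        # correlate only the first ~30 seconds
cam = cam[:n].astype(np.float64)
usb = usb[:n].astype(np.float64)

lag = np.argmax(correlate(cam, usb, mode="full")) - (len(usb) - 1)
print(f"Offset: {lag / rate_cam:+.3f} s "
      "(positive means the camera track lags the USB track)")
```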

For live streaming, however, this workflow entirely eliminates the audio delay without any additional complexity or steps.

For those shooting in the field with XLR microphones, you might also consider recording into a Zoom or Tascam or equivalent high-quality field recorder and routing its output to the camera to use as a sync reference for aligning and replacing the audio in post.

LIGHTING AND BACKGROUNDS

Even the best camera will struggle in poor lighting, while a bad background can distract from your presentation. How can you optimize these?

Lighting

When considering lighting, a lot comes down to the kind of “mood” you’re hoping to set for your videos. If you are targeting an entertainment-minded audience, darker cinematic lighting with shadowing might create a better tone for your videos versus a well-lit subject. Your room size also dictates a lot of your lighting considerations, such as whether you are just lighting the subject in a tight shot versus lighting design for an entire room for wider shots.

The best advice here is to watch a variety of YouTube videos that have the kind of character you like and are aimed at your target audience to understand the kind of lighting they use. There are also myriad YouTube tutorials on lighting any given kind of room.

In our case, we chose to flood the subject with strong lighting across the front and from above to create a bright, evenly lit subject, since our goal is to have the subject take a backseat to our content. We use twin Neewer dimmable light panels which feature interleaved rows of 3200K warm LEDs and 5600K cold LEDs with independent brightness controls, complete with tripods, a reasonable CRI of 96 and 3360 lux max output, making them bright enough to double for larger-area shoots. In our case we have them flanking the camera about 3 feet from the subject and set to their lowest settings to flood the subject from the front with even lighting.

We also have twin 5-bulb hydra floor standing lights flanking the light panels flooding the ceiling with a total of ten 5600K 850lm bulbs to create an even ambient glow in the room. The bulbs are aimed to focus most of the glow in a box around the subject on the ceiling to reflect down an even top lighting aura.

In place of the hydras, softboxes are a more traditional form of lighting, but require considerably more space. In our case, our studio must occupy minimal floor space and be permanently installed with no teardown/setup, making the hydras and light panels superbly suited for their minimal space requirements and allowing them to be configured to wrap around an existing desk and remain permanently installed.

Background

The background of your videos is one of the most important elements in creating the “character” of your videos. This is an extremely personal choice with few hard and fast rules, other than ensuring that there is no visual imbalance (such as a large amount of detail to your right and an empty wall on your left) and avoiding fine detail or lettering that can create a moiré effect on camera. If you are submitting your videos to outside publishers, check their requirements, as some ban background content containing logos, books with recognizable titles, etc. For those with a plain white wall, you might try placing colored LED lights on the floor aimed upwards as a simple trick, using removable wallpaper, or placing a few small bookcases behind you if space permits.

In short, the background of your videos should reflect your personal brand and visual identity and the mood and “character” you are trying to project and is deeply personal.

One technical consideration is the amount of space you have. In our case, there is less than 3 feet between the subject and the wall, with 2.5 of those feet comprising the hallway that forms the sole ingress/egress space to a key area, so there were few options to position anything behind the subject on a permanent basis.

We also wanted to sidestep for now the process of establishing a visual identity for the videos, retain greater flexibility and focus viewers on the subject at hand rather than distract them with a distinctive backdrop.

For that reason, we chose to go with a pure black monochrome photographic backdrop. We use two versions, one is a Lastolite collapsible background measuring 6×7’ that can be used for wide shots and the other is an Angler PortaScreen roll up/down self-supporting backdrop that measures 6.7 x 5’ and is used for most of our videos. The PortaScreen is particularly portable, folding up to a small case and requiring nothing more for use than simply unlatching the cover and pulling the screen upwards, supported internally with a pneumatic X-brace structure. While it can be temporarily stored vertically, the manufacturer notes that it should ideally be stored horizontally.

MICROPHONE

In many ways your selection of microphone is almost more important than the camera. A poor-quality video can still convey information if the audio is understandable. On the other hand, beautiful and vivid imagery with unintelligible audio is useless for instruction and tutorials. How does one pick an ideal microphone?

Deciding Between USB Versus Analog Microphones

The very first decision you have to make is whether to purchase a USB microphone or an analog microphone. USB microphones are all-in-one solutions that you simply take out of the box, connect to your computer and you’re ready to go. They handle the conversion of the analog sound to the digital signal needed by the computer entirely inside the microphone. Analog microphones on the other hand simply output an analog signal and require a separate device to digitize the sound and connect it to the computer. Smaller microphones like lavalier/lapel mics and DSLR-optimized boom mics typically have 3.5mm jacks, while most other microphones have XLR jacks.

The difference is that a USB microphone is an all-in-one solution that doesn’t require anything else, while analog microphones require additional hardware and introduce complexity to your setup. On the other hand, USB microphones sacrifice audio quality for convenience, while analog microphones are infinitely customizable and offer the greatest audio quality.

One point worth noting is that USB microphones are almost exclusively designed to be placed within 6 inches to a foot of the speaker. This works great when seated at a table, but if you are planning to sit in the middle of a room with the microphone hanging 4 feet above you, most USB microphones are not going to be suitable.

In the end, if you are just starting off and aren’t looking to do much more than dramatically improve your video conferences, live stream and record tutorial videos and podcasts, want the minimum of technical complexity, need something highly portable and will be sitting right in front of the microphone, USB microphones are likely the best fit for you. On the other hand, if you need the highest audio quality, the ability to feed multiple input signals like multiple speakers on a podcast, or flexibility for the future, an analog setup is more expensive but may be your best bet.

USB Microphones: The “Just Works” Microphone

For those just getting started with creating their first video or podcasts, two popular choices are the Blue Yeti and AT2020USB+ USB microphones. Both are cardioid condenser microphones, though the Blue Yeti also supports several other patterns for interview-style multiple speakers, whole room capture, etc.

Most importantly, USB microphones connect directly to your computer and don’t require any additional hardware of any kind, not even a power supply, and have no drivers or software to install. They quite literally have a USB jack on the bottom of the microphone that connects directly to the USB port on your computer, and the moment you plug them in they are instantly recognized as a computer microphone, drawing all of their power from the computer, with all video conferencing and recording software recognizing them as-is.

We use a Blue Yeti for our video and have been extremely impressed with its quality. We ultimately needed a clip-on pop filter since it is more sensitive to plosives than some microphones, but a basic pop filter has largely eliminated the issue.

USB microphones are extremely simple to use and for the vast majority of use cases will be the best choice for someone just getting started who doesn’t want a lot of complexity, is primarily doing spoken-word tutorials or for whom price is a driving factor. They typically can be addressed from around 6 inches away, meaning they can be kept safely offscreen out of the view of the camera and still yield high-quality audio. Their response characteristics are typically tuned for spoken-word vocal recording, meaning they yield excellent results for tutorials and other videos, though they may not be ideally suited for those seeking to record music.

They are also eminently affordable for their quality. Many like the Blue Yeti even come with a built-in stand, meaning you simply take it out of the box, put it on your desk and plug it into your computer to have an instant high-quality podcasting and video recording solution.

On the other hand, the ADC hardware (see below) in most USB microphones makes their resulting audio colder and crisper than analog microphones. Compare the audio of a Blue Yeti against a Shure SM7B coupled with an Apollo Twin USB and the difference is readily apparent.

Keep in mind, however, that USB microphones typically require the speaker to be seated in front of them within a foot or less of the microphone and are not usually suited to be positioned several feet above or in front of the user, such as when shooting a user standing in the middle of a room.

In the end, for most users seated at a table, USB microphones yield audio that is exceptionally well suited for their needs and are eminently affordable.

Analog Microphones & ADCs

Those needing the best possible audio quality, or needing to place the microphone far away from the speaker or clipped to a lapel, will eventually upgrade to an analog microphone. That means you’ll need something to convert its analog signal to the digital signal needed by a computer, a process called Analog to Digital Conversion (ADC). You can simply rely on your camera’s built-in conversion using its analog microphone jack, but for the highest quality audio, or to connect directly to your computer without your camera, you will need an external converter.

Many kinds of lavalier and DSLR-optimized boom microphones can be directly connected to your DSLR camera without any further equipment needed, relying on its ADC, making them as easy to use as a USB microphone. However, the need to connect through the camera means that if you want to record audio only, such as a podcast, you will still need to turn your camera on even if you don’t capture the video feed. You can also connect XLR microphones through mixers or other hardware directly to the camera in this way. It is important to remember, however, that you’ll be relying entirely on the camera’s ADC system under this model.

For the greatest flexibility and highest audio quality, you’ll want a standalone ADC that converts the analog microphone signal to the digital signal the computer needs.

There are an almost infinite variety of units available today, with popular ones including the Scarlett 2i2 at the more affordable range on through more expensive units like the Apollo Twin USB and up. Like microphones, ADCs have very distinctive sound profiles meaning that recording the same microphone through different units can yield drastically different sounding audio.

For those needing to integrate multiple channels, many mixing boards like the Yamaha MG10XU, the Behringer Xenyx series and the Mackie ProFXv3 among myriad others all offer built-in ADC with USB output directly to the computer. Many actually offer bidirectional USB support, both outputting audio to the computer to be recorded and streamed and appearing as microphone inputs to the computer, allowing the output of multiple applications to be streamed in parallel as different channels to the board to be mixed, adjusted and streamed back to the streaming/recording software.

There is no single “right” answer to the “best” ADC. A lot comes down to your specific needs. Do you just need to connect a single XLR mic to your computer or do you need the flexibility to feed multiple microphone inputs? What about the ability to mix those inputs on-the-fly?

The best choice is to search YouTube for videos showcasing various ADC units paired with the microphone you are considering to hear them for yourself and pick the one you think sounds the best. Most have the same basic technical characteristics on paper, with the real difference being in how they sound.

Do keep in mind that some microphones may require phantom power, while guitar pickups require Hi-Z support. Some units support the very high gain required by microphones like the SM7B, while others require an inline gain booster like a Cloudlifter. Thus, there are very real technical characteristics of each ADC that you must align with your own needs, but typically there will be many units meeting most needs, so in the end the real decision comes down to which sound you prefer and your price range.

In the end, for most studio microphones, you will end up with one of two configurations: standalone ADC or mixer.

In the most basic configuration, your analog microphone is connected by an XLR cable directly to an ADC unit (perhaps with an inline gain booster), and the ADC unit is plugged directly into your computer, with its headphone monitoring jack connected to the DSLR’s microphone input. For more flexibility, the microphone is connected by XLR to a mixer, which either performs the ADC itself and feeds the computer via USB and the DSLR via its headphone monitor, or in turn feeds a standalone ADC via XLR.

Finally, for professional vocalists and musicians, it is worth noting that an additional criterion in deciding on an ADC may be how well it integrates into your DAW environment. High end ADCs often include custom ASICs that permit select audio plugins to run entirely in the ADC, offloading processing from the recording computer and may also be certified for specific packages.

The Perfect Microphone For You

Contrary to what you might read on the web, there is no single “perfect” microphone. Even when price is not a consideration, the $10,000 Sony C-800G is not necessarily the best microphone for every application. Once you reach past the cheapest of microphones, selecting the “best” microphone for a given purpose is really about selecting the “character” of the sound you want. As the Slate VMS ML-1 reminds us, every microphone has a distinct aural profile that is instantly recognizable, from the bass-laden boom of the radio and podcasting staple Shure SM7B to the warmth of the Neumann line.

The only real way to find the best microphone for you is to search YouTube for recordings of that mic and look for bakeoffs comparing it to other mics, to listen to its distinctive aural signature and hear for yourself which you prefer. Try to find someone with a voice that sounds similar to your own to really understand how it sounds in practice.

I strongly recommend listening to videos made by professional musicians and audio engineers who have paired the mics with the appropriate equipment and adjusted them to their voice, rather than videos by “influencers” who rarely have their equipment ideally configured. For example, the SM7B by default tends to sound extremely bass-laden, which might not be your ideal sound, but this acoustic signature can be readily changed via the DIP switches on the back that adjust its bass rolloff and presence boost, largely eliminating that bass boom. Thus, while you will encounter quite literally tens of thousands of YouTube videos of podcasters with deep booming voices from that microphone, a quick search of professional audio engineer tutorials of the mic shows just how differently it can sound. Similarly, simply swapping out one ADC for another can make a microphone sound entirely different, so it is important to watch videos that have the microphone properly configured and that also list the entire pipeline being used.

Do you want a warm buttery sound or a cold crisp sound? Do you want harsh precision or softer rendition? Much like your choice of visual background comes down largely to taste, so too does microphone selection.

It is worth noting that those recording strictly spoken word programs will likely choose a different microphone than aspiring vocalists who in turn will likely choose a different microphone than a musician. In fact, vocalists may even switch microphones multiple times in a given song to leverage each one’s distinctive sound. Different instruments also often have different microphone selections they are most often paired with.

Again, the best way to select the best microphone for you is to turn to YouTube and listen for yourself. Though, as we’ll come to in a moment, it is critical to remember that there are multiple stages in the audio pipeline, including the ADC and intervening hardware like gain boosters and mixers. Look for videos where the presenter plays the raw unprocessed audio and clearly lists the exact audio pipeline they are using, such as an SM7B using a Cloudlifter and plugged into an Apollo Twin ADC. Remember that the ADC can have as much of an influence on the sound of a video as the microphone itself.

There are also a number of technical aspects to consider when selecting a microphone.

A critical one for video is the address distance. Some mics are designed for the speaker to be 6 inches away from the microphone, meaning the mic can be safely hidden off camera, while others require the speaker to speak directly into the microphone from an inch or less away in order to generate their distinctive acoustic profile, meaning the microphone will be a visible part of your videos. Some microphones like the SM7B are known for their pronounced proximity effect, while others like the RE20 are popular precisely for their lack of such distance-based aural differences. Similarly, those needing much greater distance from their microphone, such as when sitting in the middle of a room with the mic several feet away, will need a boom mic. For those who plan to travel with their microphone, some designs are more fragile than others.

OTHER CONSIDERATIONS

When setting up a home video recording studio, there are many other small details to consider.

Stray Light & Sound

It is important to minimize acoustic noise and uncontrolled stray light in the room. Don’t rely on outside natural light, since it is unpredictable. For example, if you begin a video conference in the late afternoon, when sunset hits you may have sharp light directly in your camera lens washing out your video, or a bank of clouds could suddenly plunge your video into gloomy dullness.

Wired Internet

While modern wireless internet is extremely fast and low-latency, it can exhibit unpredictable speed changes that disrupt live streaming, so we strongly recommend using wired internet. MacBook Airs can use a USB-C wired ethernet adapter.

Power Management

When using a MacBook Air, keep in mind the maximum power budget of its two USB-C ports. You can safely fit a USB HDMI capture device and a Blue Yeti mic on one port along with a wired ethernet adapter, but if you begin to put too many devices on one port you can overdraw it and see strange behavior such as random disconnects or unusual errors. This holds for any computer, including desktops, but is most acute for laptops.

USB Hub

If using a MacBook Air or other capture computer with a small number of ports, you’ll almost certainly need a USB hub. Make sure it has the internal bandwidth to support USB 3 across all ports simultaneously.

Thermal Management

When shooting indoors with air conditioning, you likely won’t run into any issues with overheating equipment, but keep in mind that when doing long video shoots at Full HD or 4K, your camera, capture card and computer may all begin running extremely hot. If shooting outdoors or in an extremely warm unventilated space (as some home recording rooms are due to excessive noise padding and vent diversion), keep in mind that it is possible to encounter thermal slowdown or shutdown. Fans should be avoided since they introduce noise – instead try to position equipment in a way it can be more easily cooled by the ambient air, such as elevating the laptop on a perforated metal stand that allows greater airflow around it.

Power Backup

Even in areas with extremely reliable power, it is possible to have brief power blips or brownouts, especially during hot summer days or during wind and ice storms. If using a laptop as a recording device, it obviously is able to use its internal battery, but even a slight power blip could be enough to cause your camera to disconnect or your lights to dim, causing a noticeable visual change in your video.

We strongly recommend powering the camera and lights from a UPS battery backup system if these considerations are important. Indeed, after two ruined takes from power instability during a particularly warm day, we now run our entire studio off UPS power.

One thing to note however is that consumer sinewave UPS units do not generate perfect sinewaves, which means that precision audio equipment will yield suboptimal results when operating on UPS power. If using professional audio equipment, consult its power requirements, as units may have very stringent power requirements.

Even when not using a UPS unit, remember that your camera is connected to your laptop over that HDMI cable. That means that even if your laptop is connected to a surge protector, if your camera is simply plugged into a wall outlet, a surge that hits your camera can travel over that HDMI cable to your computer and destroy it as well. In turn, if that computer is plugged into wired internet that connects to a switch into which the rest of your computers are plugged, you could risk a lot of damage. Make sure that, at the very least, you plug everything from your camera to your lights into a surge protector.

It is worth noting that when shooting in the field, many higher-end lights have the ability to run off battery power, meaning you can run your camera, lights and laptop entirely in the field. If using an external power source like an electronics-safe inverter generator, make sure that everything is plugged into the same unit to ensure a common ground.

POST PRODUCTION AND EDITING

For live streaming, there is nothing more you need to do – you simply stream the camera and audio output as-is! For prerecorded talks, you may decide not to invest in further production and simply release the video as-is, which in many cases may be “good enough.” Many popular tutorials are merely recordings of conference talks or past in-person lectures.

To add production value, however, you can use software like OBS Studio to dramatically improve the quality of your live productions, doing everything from live correction of video and audio to complex effects like combining not only multiple video feeds but even live data from the web, including dynamic web pages, into your feeds. In many cases you can perform all of the transitions, titling, corrections and visual effects you want live while filming using OBS.

For the highest production value, including blending multiple takes and polishing video and audio, you’ll want to use nonlinear editing software like Adobe’s Creative Suite or Premiere Elements or similar.

Live Editing Using OBS

The Open Broadcaster Software (OBS) package is a de facto standard among both gamers and many other kinds of live streamers for its limitless array of tools that can do everything from live-correcting video and audio to blending multiple streams with multiple picture-in-picture views, high-quality titling and transitions and the ability to draw live external data into overlays, including dynamic live-captured web content. You can create complex broadcast-style workflows using hotkey macros to allow you to instantly transition from full-frame video to full-screen computer capture with video PIP with beautiful dynamic overlays drawing in live web data, back to full-screen video, switching to another camera, displaying an interstitial screen, etc.

There are an infinite array of tutorial videos on YouTube showcasing the full extent of OBS and the kinds of capabilities it supports. For many users, OBS and its peers may actually provide all of the video production capabilities you need. Keep in mind, however, that tools like OBS operate live, meaning they aren’t editing packages that let you go back and record a segment and splice it in. They require you to script out your video by determining in advance the kinds of title screens, overlays, interstitial cards and content you wish to use, create those building blocks and assign them to hotkeys to use in realtime. Picture the production room of a television studio cutting back and forth between video streams and you have an idea of both what OBS is capable of and the kind of live workflow it requires.

A benefit of live editing like OBS is that you don’t need any local disk storage – you’re simply streaming everything live out to a service like YouTube/Twitch/etc. This means that filming a 12-hour marathon conference in compressed 4K doesn’t require 700GB of local high-performance storage. The downside is that you can’t blend takes together or create bespoke transitions, titles and effects tailored to the slight nuances of the actual filmed content. You also don’t have access to the kinds of sophisticated offline effects such as the precise audio correction and adjustment available offline in a DAW (though it is worth noting that for those who are less price-conscious, accelerators exist from companies like Universal Audio, with its Apollo line, and many others that can apply many of these filters in realtime for livestreaming). You can also readily exceed the available resources of your system, especially when applying highly expensive visual effects to high-performance game captures (though more advanced users will often run OBS on a separate computer to isolate it from their recording machine).

In the end, when first getting started you may find that just hitting “record” on your camera is “good enough” for your needs. For those that want to add production value to their streams and tutorials and create more complex visual layouts, OBS will likely meet the needs of most users, complete with large-scale PIP and external content layering and a full array of transition and titling options.

Do keep in mind, however, that live streaming requires you to have sufficient bandwidth to support your desired resolution level. Streaming at 4K requires a reasonably capable internet connection that may not always be possible for all home studios. In those cases, prerecording your videos and uploading them after the fact may be the only option.
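
As a rough rule of thumb, compare your measured upload speed against the bitrate you plan to stream at, with comfortable headroom for audio, protocol overhead and bursts. The bitrates and margin in the sketch below are ballpark assumptions, not platform requirements – check your streaming service’s current recommendations:

```python
# Rough check of whether an upload link can sustain a given stream.
# The bitrates and headroom factor are ballpark assumptions -- consult your
# streaming platform's current recommendations for authoritative figures.
UPLOAD_MBPS = 20                      # measured upload speed (placeholder)
STREAM_MBPS = {"1080p60": 9, "4K30": 40}
HEADROOM = 1.5                        # ~50% margin for audio, overhead, bursts

for mode, mbps in STREAM_MBPS.items():
    needed = mbps * HEADROOM
    verdict = "OK" if UPLOAD_MBPS >= needed else "insufficient"
    print(f"{mode}: needs ~{needed:.0f} Mbps sustained -> {verdict} "
          f"on a {UPLOAD_MBPS} Mbps link")
```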

For those that desire the highest video and audio quality and, most importantly, want the ability to record multiple takes and splice them together to fix errors, condense an interview down to its most succinct parts, etc, you’ll need to use what is known as “nonlinear” video editing software.

Video Editing

For those that want to add additional production value to their videos, even relatively simplistic editing can yield dramatic improvements, from adjusting brightness and tint to compensate for poor lighting to removing noise and cleaning audio captured under less than ideal circumstances. While live software like OBS can perform many similar tasks, more sophisticated filters can’t always be performed in realtime, while only offline editing offers the ability to rewatch a clip over and over again to tune it to perfection.

Interactive nonlinear editing systems today are relatively straightforward to use, allowing even novice users to perform the major editing and polishing tasks they need, while offering a wealth of powerful editing and improvement tools as users get comfortable with them.

If budget is not a major limiting factor, Adobe’s Creative Suite is available for a monthly subscription fee and offers a nearly limitless wealth of video editing tools that cover most use cases, while Adobe’s Premiere Elements offers a reasonable cross-section of editing tools for a one-time purchase.

Typical editing tasks include splicing together segments from multiple takes, cleaning and polishing audio and video, adjusting for lighting and acoustic deficiencies and adding titles. Even Premiere Elements allows you to readily blend paired live camera and screen capture takes, creating a picture-in-picture (PIP) view of the speaker overlaid on top of the slides, similar to the Google DevRel talks, or cutting back and forth between fullscreen views of the video and slides throughout the talk.

For those interested in adding additional visuals, music and other acoustic effects, make sure you have the proper licensing – many editing packages offer purchasable extension packs or subscriptions of vast galleries of royalty-free or use-specific stock content.

YouTube offers a wealth of ideas and step-by-step tutorials on how to achieve different effects in each video editing package.

Editing Hardware

What about the hardware needed for video editing?

While it is extremely adept as a portable recording system, a MacBook Air does not make for a good video editing platform due to its limited processing and memory capabilities. At the same time, you don’t need a Mac Pro or massive workstation. Gaming rigs are unsurprisingly adept as video editing systems due to their oversized CPU, memory, graphics and interconnect specifications. In a pinch, even a MacBook Air could potentially serve as an editing system, though with considerable latency and limitations.

At its most basic, you can simply transfer the video files from the camera or recording computer directly to the editing computer. Modern high-speed memory cards support extremely fast transfers when copying directly from the camera. Keep in mind, however, that accessing the memory card on many cameras, including the SL3, means opening the battery compartment, whose door may not open sufficiently wide with many tripod plates, requiring you to remove the camera from the plate first.

Copying files directly from the MacBook Air to an editing computer is readily achievable through many means, including USB drives, but one avenue to consider, if you intend to do a large volume of regular video editing, is the use of a NAS.

Rather than copying files to and from the editing workstation, for studio use we recommend the use of a local storage fabric like a Synology NAS, which comes in many different configurations and can be equipped with RAID-optimized drives such as the helium-filled Western Digital Ultrastar DC series for reliable longevity under heavy sustained load. Many organizations already have NAS solutions deployed as part of their backup strategies and so can simply repurpose them as a RAID backing store for their editing suite.

The benefit of a NAS like a Synology unit is that it can be mounted as a remote file server on both the MacBook Air and the editing workstation, allowing seamless file sharing: you can load files directly into the editing software while others are still copying. While we recommend recording your videos directly to the local disk of the MacBook Air and then copying to the NAS afterwards, it is possible under many recording scenarios to record directly to the NAS mount, but you will need reliable networking and must ensure you don’t exceed your available bandwidth or NAS cache, depending on the bitrate and number of parallel streams.

In our case, all videos are recorded locally to the MacBook Air and copied directly to the NAS, with the editing workstation pulling the files directly from the NAS rather than copying them locally to the workstation’s disk. With a 24-port gigabit switch we have not observed any latency from this configuration and are able to perform realtime editing over a full-length video at 1080p60 without any network-induced delays. For those transferring and editing larger numbers of simultaneous streams in 4K, many of the Synology NAS units can be upgraded with 10Gb network cards. While the Mac Air supports only 1Gb wired networking, upgrading the NAS and editing workstation to a 10Gb card allows an optimized path between the NAS and the editing workstation (assuming it has a spare PCI slot of the appropriate speed and the CPU to handle the additional processor load). Though, once again, we’ve not experienced any issues with 1080p60 editing under gigabit networking, even when blending up to three 1080p60 streams together.
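
To see why gigabit networking hasn’t been a bottleneck for us, the arithmetic is straightforward. The per-stream bitrate in the sketch below is an assumption (roughly what a DSLR records 1080p60 at); substitute the actual bitrate of your own footage:

```python
# Back-of-the-envelope check of how many editing streams fit within a gigabit
# link to the NAS. The per-stream bitrate is an assumption (roughly a DSLR's
# 1080p60 recording bitrate); substitute your own footage's figure.
GIGABIT_MBPS = 1000
USABLE_FRACTION = 0.7        # real-world ceiling after protocol overhead
STREAM_MBPS = 60             # assumed 1080p60 bitrate

usable = GIGABIT_MBPS * USABLE_FRACTION
print(f"Roughly {int(usable // STREAM_MBPS)} simultaneous 1080p60 streams fit "
      f"within ~{usable:.0f} Mbps of usable gigabit bandwidth")
```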

To ensure optimal audio quality of your finished product, you’ll also need a pair of high-quality earphones or a quality receiver and monitor speakers. We use a Yamaha receiver with four speakers positioned in a surround field around the editing suite, calibrated for our specific room’s acoustics and recalibrated whenever there are substantive changes to the room, but here a reasonable set of earphones might work better for most users. If you are specifically targeting mobile users, you might also check your video on AirPods and other mobile earphones to hear how their specific response curves interact with your video’s acoustic profile.

Publishing

If your intended audience is the web, most video editing packages make it trivial to export your video as a web-optimized MP4 file which you can readily upload to YouTube. Keep in mind that higher quality settings are not always noticeable when web-streaming, and if your primary audience largely consumes your content on mobile devices, it may be worth dialing back the “quality” settings in your editing software’s video export workflow. For 1080p60 videos uploaded to YouTube, we’ve not typically observed a noticeable difference between “medium” and “high” quality MP4 exports, but experiment to see what settings work best for your content. The smaller the video file, the faster it is to upload to YouTube and the less local disk it consumes, as well as being faster to render and export from the editing software.
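
If you would like to compare export sizes and quality outside your editing package, the same kind of ffmpeg invocation shown earlier can render a finished master at two quality levels for an A/B comparison after upload. This is a sketch: it assumes ffmpeg is installed and on your PATH, and the filenames and CRF values are illustrative.

```python
# Render the same master at two H.264 quality levels so the uploaded results
# can be compared side by side. Assumes ffmpeg is on the PATH; filenames and
# CRF values are illustrative placeholders.
import os
import subprocess

master = "tutorial_master.mov"           # hypothetical full-quality export
for label, crf in [("high", 18), ("medium", 23)]:
    out = f"tutorial_{label}.mp4"
    subprocess.run([
        "ffmpeg", "-y", "-i", master,
        "-c:v", "libx264", "-crf", str(crf), "-preset", "slow",
        "-c:a", "aac", "-b:a", "192k",
        out,
    ], check=True)
    print(out, round(os.path.getsize(out) / 1e6, 1), "MB")
```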

For those targeting DVD or BluRay distribution, many editing suites like Premiere Elements actually have built-in support for exporting the necessary file formats and even adding compatible menuing.

After you publish your video, we recommend you keep the underlying video and audio files and other assets readily accessible. This makes it possible to reuse assets, such as a stock “thanks for watching” clip. Most importantly, however, it ensures consistency across your videos, with the exact same openings, closings, title and credits cards and other assets being used across each video. More subtle characteristics like specific audio and video filters and effects can be complex to create, especially if they involve nonlinear and time-based keyframes, so we recommend beginning each video in a series by copying the editing files from the last video and simply swapping out the video and audio assets and adjusting from there to ensure consistency.

That’s It!

We hope this tutorial was useful to you!