A way too long introduction to retro consoles, modern TVs and gameplay capture.
A few post-processing steps may be necessary on freshly captured videos. All of the basic ones can be performed with open source software like VirtualDub.
Interlaced videos contain two half pictures (= two fields) in one frame and are usually saved at 29.97 frames per second. If you watch them on a PC without deinterlacing you will notice a combing effect in motion.
There are two cases of interlacing that can happen with captured video games, natively interlaced games and 240p games that were recorded as 480i.
Real interlaced video
In this case the video has to be deinterlaced by one of the many deinterlacing algorithms. Depending on the algorithm you choose you will end up with a 29.97fps or 59.94fps progressive video. Good algorithms for deinterlacing games are Yadif and QTGMC. Yadif should be available in most programs, while QTGMC is a bit more complicated to use and requires a lot of processing time. The guide page shows an example of how to deinterlace 480i game content.
240p games recorded in 480i
This step is not complicated, but using the wrong algorithm will destroy the quality of your video. The correct way to handle most videos like this is to duplicate the fields and double the frame rate, starting with the top field. The deinterlacing filter in VirtualDub will do this correctly, converting a 480i 29.97fps video to 480p 59.94fps. In some rare cases you have to start with the bottom field. Choosing the wrong starting field messes up the order of the frames and is obvious to spot. Depending on your capture device there might be additional steps necessary to remove flicker. A great tool to encode videos like this is yua. There is also Fudoh's "all fixing" VDub/AviSynth package for 240p games available from his site. It contains a filter plugin for VirtualDub, a saved processing chain and an AviSynth script to load the original recording into VirtualDub.
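The field duplication described above (a "bob" deinterlace) can be sketched in plain Python. This is a toy illustration of the principle, not what VirtualDub does internally; `bob_deinterlace` is a hypothetical helper operating on a frame represented as a list of rows:

```python
def bob_deinterlace(frame, top_field_first=True):
    """Split one interlaced frame into two progressive frames by
    line-doubling each field. `frame` is a list of rows; even rows
    belong to the top field, odd rows to the bottom field."""
    top = [frame[y] for y in range(0, len(frame), 2)]
    bottom = [frame[y] for y in range(1, len(frame), 2)]

    def double(field):
        # Repeat every field line twice to restore the full height;
        # two output frames per input frame doubles the frame rate.
        return [row for row in field for _ in (0, 1)]

    fields = (top, bottom) if top_field_first else (bottom, top)
    return [double(f) for f in fields]

# A toy 4-line "frame": rows 0/2 were drawn at time t, rows 1/3 at t+1.
frame = ["A0", "B0", "A1", "B1"]
first, second = bob_deinterlace(frame)
print(first)   # ['A0', 'A0', 'A1', 'A1']
print(second)  # ['B0', 'B0', 'B1', 'B1']
```

Swapping `top_field_first` reverses the order of the two output frames, which is exactly the "wrong starting field" mistake described above: the motion steps backwards every other frame.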
A simple step in post processing is fixing the aspect ratio. Unless you are capturing from a scaler that already outputs the correct aspect ratio, you need to change the aspect ratio to match either 4:3 or 16:9, depending on the game. Most capture devices record at a 720x480 resolution, which is slightly wider than what you would see on a TV. To fix it you resize the image horizontally to either 640 pixels for 4:3 or 853 pixels for widescreen content. This should be done with the bicubic or Lanczos3 algorithm. It doesn't matter whether the console renders the image with a horizontal resolution of 256 pixels (SNES) or 384 pixels (CPS); all games are designed to be displayed at a 4:3 (or 16:9) aspect ratio. If you skip this step, round objects will appear slightly oval.
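The two target widths follow directly from the display aspect ratio and the 480-line height; a quick sanity check (plain Python, just the arithmetic):

```python
# Target widths for a 720x480 capture, resized to the display aspect ratio.
height = 480
w_43 = round(height * 4 / 3)    # 4:3 content
w_169 = round(height * 16 / 9)  # 16:9 content (853.33 rounds to 853)
print(w_43, w_169)  # 640 853
```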
To change the resolution of your captures you can just stretch them, but it pays off to know about the basic scaling algorithms. The following four are the most common you will encounter, but of course there are countless other algorithms with their own strengths.
Nearest Neighbor - This algorithm just duplicates pixels and will retain sharp edges. It should only be used with integer scaling factors. If you want to convert a 720x240 recording of a 240p game to 720p while still keeping it sharp you should first scale it vertically by a factor of 3. To correct the aspect ratio from the new 720x720 resolution to 960x720 you should use one of the following algorithms in the second step.
Bilinear - Bilinear filtering interpolates between the four nearest source pixels to compute each output pixel. The result is quite blurry, but it is a very fast algorithm that is used in many applications and games.
Bicubic - The bicubic filter uses bicubic interpolation, which is a more sophisticated algorithm that creates a smooth image with a low amount of artifacts. It is well suited to scale images and doesn't need an integer scaling factor.
Lanczos3 - Lanczos filters are often considered a good compromise among the higher quality scaling filters. They require more computational power than the bicubic filter and produce slightly better results in many cases.
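The nearest neighbor procedure from the list above can be sketched in a few lines of plain Python. `scale_nearest` is a hypothetical helper (real tools work on pixel buffers, not nested lists), but it shows why integer factors stay sharp: every output pixel is an exact copy of one source pixel.

```python
def scale_nearest(img, sx, sy):
    """Nearest-neighbor scaling of a 2D list of pixels by the
    (possibly non-integer) factors sx and sy. Each output pixel
    is copied from exactly one source pixel, so edges stay hard."""
    h, w = len(img), len(img[0])
    out_h, out_w = round(h * sy), round(w * sx)
    return [[img[int(y * h / out_h)][int(x * w / out_w)]
             for x in range(out_w)]
            for y in range(out_h)]

# Integer scaling keeps edges sharp: a 2x2 image tripled vertically,
# mirroring the 720x240 -> 720x720 step described above.
img = [[1, 2],
       [3, 4]]
tall = scale_nearest(img, 1, 3)
print(tall)  # [[1, 2], [1, 2], [1, 2], [3, 4], [3, 4], [3, 4]]
```

With a non-integer factor some source pixels get copied more often than others, which is the uneven "shimmering" look the list warns about; that second aspect-correction step is better left to bicubic or Lanczos3.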
If you have captured a video from a source with Limited Range RGB output on a card that supports Full Range RGB you might have to expand the colors from 16-235 to 0-255. This is a trivial fix that can be done with the Levels filter in VirtualDub or similar options in other programs.
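The levels fix is a single linear remap per sample. As a sketch (plain Python; `expand_range` is a hypothetical helper, the Levels filter does the equivalent per channel):

```python
def expand_range(v):
    """Map a limited-range value (16-235) to full range (0-255),
    clamping anything outside the studio swing."""
    return max(0, min(255, round((v - 16) * 255 / 219)))

print(expand_range(16), expand_range(235))  # 0 255
```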
When you're working with videos you have to know about Rec. 601 and Rec. 709, which are recommendations on how video signals in the YCbCr color spaces (including YUY2) should be interpreted and displayed. Rec. 601 applies to SD content (480i/p) and Rec. 709 applies to HD content (720p and above). If a program decodes a video with the wrong color matrix you will end up with slightly wrong colors; greens in particular will look off. Rec. 601 doesn't specify a color gamut directly, but NTSC SD content uses the SMPTE-C color space by convention. Even if you are recording from an RGB source you will most likely have to work with YCbCr, since most consumer recording devices convert the video to YUY2 or similar internally.
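The difference between the two matrices can be made concrete with a small decode sketch. `ycbcr_to_rgb` is a hypothetical helper using the standard limited-range YCbCr decode formulas, parameterized only by the luma coefficients; the sample pixel is arbitrary:

```python
def ycbcr_to_rgb(y, cb, cr, kr, kb):
    """Decode one limited-range YCbCr sample to full-range 8-bit RGB
    using the luma coefficients kr/kb (Rec. 601: 0.299/0.114,
    Rec. 709: 0.2126/0.0722)."""
    kg = 1 - kr - kb
    yn = (y - 16) / 219        # normalized luma, 0..1
    pb = (cb - 128) / 224      # normalized chroma, -0.5..0.5
    pr = (cr - 128) / 224
    r = yn + 2 * (1 - kr) * pr
    g = yn - 2 * kb * (1 - kb) / kg * pb - 2 * kr * (1 - kr) / kg * pr
    b = yn + 2 * (1 - kb) * pb
    clamp = lambda v: max(0, min(255, round(v * 255)))
    return clamp(r), clamp(g), clamp(b)

pixel = (100, 90, 160)  # an arbitrary YCbCr sample
print(ycbcr_to_rgb(*pixel, kr=0.299, kb=0.114))    # decoded as Rec. 601
print(ycbcr_to_rgb(*pixel, kr=0.2126, kb=0.0722))  # decoded as Rec. 709
```

The same stored bytes come out as two visibly different RGB colors; only neutral grays (Cb = Cr = 128) decode identically under both matrices, which is why color casts from a matrix mix-up are easy to misread as a saturation problem.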
The above screenshot shows how VirtualDub opens a Lagarith encoded capture of the Framemeister's output at 720p by default. Even though the video is in an HD resolution it is decoded with the SMPTE-C color gamut. The result looks wrong, but if you don't know about the differences between the two you might not notice the problem and try to fix it by changing the contrast or the saturation. The correct version of the picture is shown in the second image. In VirtualDub this is fixed by adding the "alias format" filter and forcing Rec. 709 (HD). Update: The "alias format" filter in VirtualDub doesn't work correctly. You have to use the ColorMatrix AviSynth plugin instead.
If you're capturing footage in resolutions from 240p up to 480p you might also run into problems when you upscale the videos to 720p. For example, if you open a 240p video in VirtualDub, increase the size to 720p and then encode it with x264, it will show up wrong in media players. To fix this you need the ColorMatrix plugin for AviSynth, which converts the video to the correct color matrix for HD playback. Please note that other video editing programs might not have this problem.
Analogue RGB signals from retro consoles use composite video or composite sync to carry the timing alongside the color information. This is commonly called RGBS, as opposed to RGBHV, where H and V stand for separate horizontal and vertical sync lines; RGBHV is used by more modern interfaces, VGA for example. Neither variant carries a pixel clock, so the signal contains no information about how many pixels make up each line. On CRTs this doesn't matter, because each line of the picture is simply drawn across the whole screen. On LCD screens and for capture it is a problem: to create a digital version of the signal, each picture line is sampled into a fixed grid of pixels that doesn't take the original console into account. Usually there are 720 or 640 horizontal pixels after the sampling step.
Retro consoles usually output only a low number of pixels per line. These pixels are stretched to fill a whole line on the screen. The SNES, for example, outputs 256 pixels in each line. To fill the screen the pixels are slightly stretched, resulting in pixels that are wider than they are tall. The actual pixel aspect ratio (PAR) for the SNES is 11:10. On a PC each pixel has the same width and height, in other words a pixel aspect ratio of 1:1. In the sampled version of the signal each non-square pixel from the console influences the color of one or two horizontally adjacent square pixels, making the picture slightly blurry on the horizontal axis.
Let's have a look at a real example: a 720p screenshot of Super Mario All-Stars on the SNES, converted to a digital signal and upscaled by a factor of 3 by the Micomsoft Framemeister, then captured on a PC with the Micomsoft SC-500N1 capture card. The picture is perfectly sharp vertically, because the analogue signal contains the vertical timing information and each line of the source image ends up in a set of 3 separate lines of pixels in the sampled picture. Horizontally the picture is a bit blurry though, because each sampled pixel was created from one or two neighboring pixels of the source.
With additional information about the console the picture can be improved. By scaling the picture down horizontally to its original 256-pixel resolution with the nearest neighbor algorithm the original pixels are restored. Afterwards the picture is stretched back horizontally by a factor of 3, again with nearest neighbor. The result is a very sharp picture that looks close to an emulator's output. The picture shown here is a best case scenario; often there will be small color distortions in some columns. If the picture is scaled down to the wrong resolution the result will be a total mess, even if the target width is only one pixel off.
At the end the picture is scaled back to the correct aspect ratio with the bicubic algorithm. The result is a significantly sharper picture than the original capture. This procedure can be done with video editing programs like VirtualDub or picture editing programs like Photoshop.
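The down-and-up trick can be sketched on a single row of pixels. This is a simplified model under friendly assumptions (integer factor 3, the center sample of each pixel triple holds the true color); `restore_row` is a hypothetical helper, not a filter from any of the tools mentioned:

```python
def restore_row(row, native_width):
    """Snap a horizontally blurry row back to the console's native
    pixel grid: nearest-neighbor downscale to native_width, then
    nearest-neighbor upscale back to the original width."""
    w = len(row)
    # Downscale: pick the sample nearest to each native pixel's center.
    native = [row[int((i + 0.5) * w / native_width)]
              for i in range(native_width)]
    # Upscale: repeat each native pixel to fill the original width.
    return [native[int(x * native_width / w)] for x in range(w)]

# A 256-pixel line captured at 768 px wide: every center sample is
# "clean", the samples around it were blurred (marked None here).
native = list(range(256))
captured = []
for p in native:
    captured += [None, p, None]
restored = restore_row(captured, 256)
print(restored[:6])  # [0, 0, 0, 1, 1, 1] - clean pixels, repeated 3x
```

This also shows why an off-by-one target width is catastrophic: with `native_width` set to 255 or 257 the pixel centers drift out of phase with the grid, so the wrong (blurred) samples get picked across most of the line.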
Note: The picture above is indeed from a recording of my SNES, but it is a best case scenario. This method doesn't work as well with systems that have more than 256 pixels of horizontal resolution.
To save disk space and upload bandwidth the videos should be encoded with the x264 encoder. A simple yet powerful tool for this task is HandBrake. It's an open source program that encodes your videos in just a few clicks. Another recommended program is MeGUI. This tool bundles a lot of other applications and offers every possible option for encoding your videos with x264. However, to use MeGUI efficiently you should know your way around AviSynth scripts.
Of course you can also use any video editing program and use your gameplay footage as clips for a bigger video. A reasonably priced product for this is Sony Movie Studio Platinum Suite 12. It doesn't offer as many options as the professional offerings from Sony and Adobe, but it should be enough for a start.
x264 offers a lot of options, like lossless encodes, different color spaces and tons of advanced features. To start encoding, though, you only need to set the option for picture quality, which also determines the file size of the encode. The best way to do this is with the CRF (Constant Rate Factor) setting. It is a value between 0 and 51* that defines the overall quality of each scene. You should use a value between 16 and 24, where lower values give better quality at the expense of larger file sizes.
*(or 63 if you're doing 10-bit encodes instead of the standard 8-bit)
In the past it was common to set an average bitrate and do a 2-pass encoding, but this method is outdated unless you have specific file size limits. Using the CRF will give you lower file sizes if you're aiming for good quality.
One important attribute of video is chroma subsampling. Usually there are two chroma channels in digital YCbCr video, Cb and Cr. To lower the data rate and file sizes the chroma channels often use a lower resolution than the luma channel, which determines the actual resolution of the video. Most commonly this is done by cutting the horizontal chroma resolution in half (4:2:2), which reduces the data rate by a third without a visible impact on the video once it is in motion. Most capture cards use this mode (YUY2).
Another very popular mode is 4:2:0, which halves the chroma resolution both horizontally and vertically, so only every other video line carries color information. To display images encoded in this format the player has to upscale the color information, and depending on the algorithm and the processing power of the device the resulting quality varies substantially. 4:2:0 is used as YV12 on DVDs, Blu-rays, in H.264 and many other applications. To watch 4:2:0 content on a PC I can only recommend MPC-HC together with madVR and a dedicated graphics card to do the upscaling. Another mode is 4:4:4, the term for video that doesn't downsample the color information at all. Only professional capture hardware can process RGB or YCbCr 4:4:4 content without lowering the quality to YCbCr 4:2:2.
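The data rate savings quoted above follow from counting samples per pixel; a quick check (plain Python, uncompressed 8-bit frames, hypothetical helper):

```python
def bytes_per_frame(width, height, mode):
    """Uncompressed 8-bit frame size for common chroma subsampling
    modes. Samples per pixel: 3 (4:4:4), 2 (4:2:2), 1.5 (4:2:0)."""
    samples = {"4:4:4": 3, "4:2:2": 2, "4:2:0": 1.5}[mode]
    return int(width * height * samples)

for mode in ("4:4:4", "4:2:2", "4:2:0"):
    print(mode, bytes_per_frame(720, 480, mode))
# 4:4:4 1036800
# 4:2:2 691200   <- a third less than 4:4:4
# 4:2:0 518400   <- half of 4:4:4
```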
4:2:2 can also be used in H264 videos by setting a few parameters for the x264 encoder. The quality will be higher, but it will also result in slightly higher file sizes and not all media players might support this format.
Like other console and PC games, retro games can be streamed to services like Twitch.tv with programs like OBS or XSplit. If you're not familiar with Twitch you should spend a few days in some of the streams to get a feel for the site and streaming in general. Most streamers use emulators, which often come with input lag, inaccurate emulation and horribly blurry shaders. With the capture devices discussed on this site you can stream your retro games from the actual consoles. The quality you can get surpasses even the Virtual Console releases on Wii U and the PSN versions of PSone games.
The simplest way to stream is to use screen capture to capture the preview window from your capture program. This should work in any case. If you're using a DirectShow device in AmaRecTV you can use the Live functionality to use AmaRecTV as a source for your streaming programs and record with it locally at the same time. Some capture devices like the Elgato Game Capture HD60 even come with software that has all the streaming functionality built in.