As consumers living in the 21st century, we are almost constantly surrounded by video displays. Video has become an extremely popular and effective communication platform and is widely used for a large variety of purposes. Apart from cinemas and television broadcasts, video is used to share everything from news, social media content and product marketing to showcasing talent. Whether the talent is amateur or professional, the same video platform is utilised. The consuming audience experiences all this by watching a video screen showing video content, but behind the scenes, from a professional AV perspective, it’s an entirely different ball game.
Every video
system has three elementary components. There
are the source devices, the sink devices and the connection between the
two endpoints to complete the link. A source device is the
‘start’ point of the particular system’s signal. This is the device that
outputs or produces the video signal, and it can take various forms. The most common examples are DVD or media players, personal
computers and computer servers with video output ports. Another popular
example of a source device is a video camera, producing and transmitting
a video signal. The sink devices form the other end of the line:
they are the video screens on which the playing video file is viewed.
Multiple technologies are used to display video, such as television
screens, modular LED panels (digital billboards or big screens at sport
events and concerts) and video projectors, which project light onto a
reflective display surface. The latter is very common in presentation
venues, cinemas, auditoriums and concert stages. The connection between
these endpoints is where the challenge lies.
At
home it might seem like a really simple task to connect a DVD player to a
television with an HDMI cable and Bob, as they say, is your uncle. In
the professional video industry, systems can be a lot more complex
depending on the application. Some systems require more than one video
source to be routed to a single display, as in boardrooms with
multiple connection points. Others require a single source to be
distributed to multiple display screens, as in airports
and shopping centres. Apart from these, there are systems that require
both of the above in one solution, i.e. multiple sources routed to
multiple displays in any desired configuration. This also requires the
functionality to manipulate the current configuration and reroute any
source to any screen or screens at the press of a button. The above
challenges are all doable as equipment is readily available to meet
these needs as long as the video designer understands the system
architecture. Accurate design will ensure that the correct components
are used and configured accordingly. The users can then take charge and
create configurations however they desire. As mentioned, the real
challenges lie in the connection between source components and display
screens. Systems such as these are mostly based on unique requirements
and are designed to provide custom solutions in response. Therefore
the challenges that accompany signal distribution are not exactly black
and white.
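The any-source-to-any-display routing described above is essentially a matrix. A minimal sketch in Python of that routing state (the device names and port counts are made-up illustrations, not any real product):

```python
# Minimal sketch of a video matrix switcher's routing state: any source
# can be routed to any display, and a route can be changed at the press
# of a button. Endpoint names are purely illustrative.

class MatrixSwitcher:
    def __init__(self, sources, displays):
        self.sources = sources
        self.displays = displays
        self.routes = {}  # display -> currently routed source

    def route(self, source, display):
        if source not in self.sources or display not in self.displays:
            raise ValueError("unknown endpoint")
        self.routes[display] = source  # rerouting simply overwrites

    def feeds(self, source):
        # All displays currently showing this source (one-to-many).
        return [d for d, s in self.routes.items() if s == source]

matrix = MatrixSwitcher(
    sources=["PC", "DVD", "Camera"],
    displays=["Screen-1", "Screen-2", "Screen-3"],
)
matrix.route("Camera", "Screen-1")
matrix.route("Camera", "Screen-2")  # one source to many displays
matrix.route("PC", "Screen-3")
matrix.route("DVD", "Screen-3")     # reroute: the last press wins
```

The same structure covers all three cases from the text: many sources to one display (successive reroutes), one source to many displays, and any-to-any.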
Video signals are based on analogue
waves which are transmitted over a specific medium, such as copper and
fibre cables. Depending on the format, and the resolution of the video
signal being sent, there are distance limitations. Over longer
distances, the physical environment causes signals to weaken until they
are no longer viable for video reproduction. Within analogue signals
such as RGBHV (Computer graphics or VGA) or composite video (standard
definition video from a player or camera) for example, the amplitude of
the physical wave is what is important. A transmitted signal’s energy
will decline as it travels further along a cable. An additional
challenge is the fact that radio waves constantly exist in the air all
around us and couple onto the cable, interfering with the transmitted signal.
This is called radio noise and, although mostly fairly
weak, it does cause interference. As radio noise exists everywhere, the
interference happens across the entire length of a cable and the noise
is continuously present anywhere on that cable. This ever-present background level is known
as the noise floor.
Coaxial
cables, used for analogue signal distribution, are designed with a thick
copper core and a foil or wire strand braiding around the outer
diameter. This cable design functions as a Faraday cage and effectively
reduces noise interference. The noise, however, cannot be eliminated
entirely. Thus, of the available amplitude of the original signal, one
can only utilise the portion above the noise floor. The relationship between the two
is known as the signal-to-noise ratio, and it needs to be sufficient
for the video to be reproduced. Analogue signals can be
amplified to increase their amplitude in an attempt to have the waves travel longer distances. In some
circumstances, the amplifying component cannot distinguish between the
video signal and the noise floor, and as a result it amplifies all the
waves present on the cable, producing an increase in signal amplitude
as well as an equal increase in the noise floor. The result is that the
signal-to-noise ratio remains unchanged. Due to this phenomenon,
successful amplification needs to be applied nearer to the source, where
the signal-to-noise ratio is still sufficient. Once it is no longer
adequate, the signal will still be present but not clean enough for video reproduction.
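The arithmetic behind that unchanged ratio is simple enough to sketch. The voltages below are made-up illustrative figures, not measurements:

```python
import math

# Illustrative figures: boosting an attenuated analogue signal also
# boosts the noise riding on the cable, so the signal-to-noise ratio
# (SNR) does not improve.

signal_v = 0.2   # attenuated signal amplitude at the far end (volts)
noise_v = 0.05   # noise floor picked up along the run (volts)

def snr_db(signal, noise):
    # SNR expressed in decibels for amplitude (voltage) quantities.
    return 20 * math.log10(signal / noise)

before = snr_db(signal_v, noise_v)

gain = 10  # the amplifier boosts everything on the cable equally
after = snr_db(signal_v * gain, noise_v * gain)

print(round(before, 1), round(after, 1))  # identical SNR before and after
```

Multiplying both numerator and denominator by the same gain leaves the ratio, and hence the usable margin above the noise floor, exactly where it was.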
Certain
applications require video transmission over such large distances that
constant amplification fails to qualify as a viable solution. Balun
technology was developed to extend signal ranges. The architecture
requires a transmitter and receiver with a CAT cable in between. Baluns –
a name derived from the terms balanced and unbalanced – use technology
similar to balanced audio systems: the video signal is duplicated
at the transmitter end and both copies are sent over the twisted
pairs to the receiver end. The only difference between the two is
that the second signal’s polarity is inverted, making it a negative version of
the first. The noise, however, cannot be manipulated and thus
couples equally onto both signals. The receiver end then
collects, from the two signal feeds, a positive video signal, a negative
video signal, and two identical noise signals. These are
compared and everything that matches – in this case only the noise – is
cancelled, leaving the two video signals, of which the positive
one is used, noise free.
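The cancellation can be sketched numerically. The sample values below are arbitrary illustrations of the principle, not real waveform data:

```python
# Sketch of the balanced-line principle behind baluns: the signal and an
# inverted copy travel down the pair; the same noise couples onto both
# conductors; subtracting at the receiver cancels the noise.

signal = [0.5, -0.3, 0.8, 0.1]     # original waveform samples (illustrative)
noise = [0.05, 0.07, -0.02, 0.04]  # identical noise hits both conductors

wire_pos = [s + n for s, n in zip(signal, noise)]   # signal + noise
wire_neg = [-s + n for s, n in zip(signal, noise)]  # inverted signal + noise

# Receiver: (s + n) - (-s + n) = 2s, so halve the difference.
recovered = [(a - b) / 2 for a, b in zip(wire_pos, wire_neg)]

print(recovered)  # matches the original samples, noise removed
```

Because the noise term appears with the same sign on both conductors, the subtraction removes it exactly, while the wanted signal, appearing with opposite signs, is reinforced.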
In the digital world
things work a bit differently. Digital video is not the only component
to be distributed. Many formats exist such as DVI and SDI to cater for
different applications, but HDMI is the most common signal for High
Definition digital video distribution. However, it’s not video alone
but rather a multi-media format which hosts a collection of additional
signals such as audio feeds, power, Ethernet, EDID (Extended Display
Identification Data), which is used to automatically identify a matching,
optimum resolution between a source and a display, and lastly HDCP
(High-bandwidth Digital Content Protection), a protocol initiated by
film producers which prohibits end users from duplicating copyrighted
content. This is a large amount of information to be transported, and the
distance limitations are far more severe than with analogue video.
A
digital video signal is still an analogue wave but the polarity is
manipulated to be either positive or negative. A positive wave
represents a 1 and a negative wave represents a 0. In a previous article
I explained how the analogue metric information is converted to a
binary string. Binary numbers consist of only 1s and 0s, and the
strings can thus be sent over larger distances. The receiving end
collects the strings of 1s and 0s and reconstructs the information
before converting it back to analogue signals for human beings to
comprehend. The noise floor is still present, but the signal-to-noise
ratio is largely irrelevant, as the receiver only needs to establish whether a
wave’s amplitude is positive or negative in order to identify the
strings. Thus no information is lost. Unfortunately, though, as soon as the
signal is interrupted or becomes too weak and it loses as much as a
single bit, the strings become corrupted and the initial information
cannot be established, nor reproduced. The entire signal is then
useless. In other words, a digital signal works 100% or 0%. There is no
in-between. This is known as the ‘cliff effect’.
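The noise immunity of polarity-based decoding can be sketched as follows; the bit pattern and noise levels are made-up illustrations:

```python
import random

# Sketch of why digital transmission resists noise: the receiver only
# needs each wave's polarity (positive = 1, negative = 0), so noise
# smaller than the signal does not corrupt a single bit.

bits = [1, 0, 1, 1, 0, 0, 1, 0]
levels = [1.0 if b == 1 else -1.0 for b in bits]  # positive/negative waves

random.seed(42)
noisy = [v + random.uniform(-0.8, 0.8) for v in levels]  # noise < signal

decoded = [1 if v > 0 else 0 for v in noisy]
print(decoded == bits)  # True: every polarity survived intact

# Past the 'cliff', noise would exceed the signal amplitude, polarities
# would start flipping, and the whole string would become unusable.
```

As long as the noise amplitude stays below the wave amplitude, the sign of every sample is preserved and decoding is perfect; once it exceeds it, the stream fails outright rather than degrading gracefully.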
In
spite of this amazing technology, HDMI is still limited and as soon as a
cable exceeds 15m, an alternative solution is required to distribute
the signal. The beauty of these challenges within the current
technologies is that they receive a lot of attention and funding.
Solution focused technologies are constantly seeing the light of day.
Range extenders similar to baluns exist and are a popular solution. A
few years back, HDBaseT was introduced to distribute HD digital video.
HDBaseT uses high-quality Cat5e and better cabling to distribute media signals,
control signals and electricity to power equipment (depending on its
consumption), all over one cable. HDBaseT technology was really well
received and many display manufacturers invested to get their products
to accept HDBaseT as an input, thus eliminating the receiving component
of the HDBaseT distribution system. A further popular way to extend
digital signals is to use fibre systems instead of copper. Fibre
transmitters send light along a glass fibre, which can carry the signal over much longer
distances than copper because light suffers far less attenuation in
the fibre than electrical signals do in a cable. The same strings
containing 1s and 0s are sent, and the same principle applies at the
receiver end.
The latest and greatest form of
distributing video (and media signals) is to convert them to IP packets
and distribute them over an IP network. The world is full of IP networks
and in principle it makes sense to simply connect to a network and
introduce the media signal to the network and then collect it anywhere
else on the same network, or even at multiple places, at different
times. This great idea was halted as soon as the bandwidth requirements
became evident. Video has been streamed over networks for many years, but
it has had to be compressed to a much lower quality to make it
viable to send and download. Like anything in today’s world, the
technology caught on, and video over IP is becoming more and more
available. Technology even exists to distribute 4K resolution
uncompressed while still supporting HDCP. Due to the massive amounts of
bandwidth required, the network needs to support 10Gbps – which
is not excessive anymore. Slowly but surely the IT world will upgrade
networks to 10Gbps. The only other obstacle for video over IP systems is
to convince IT managers and network-security teams. The latter
might be the biggest challenge, but it will have to be addressed.
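A rough sketch of the arithmetic behind that 10Gbps figure; the resolution, frame rate and bit depths are common illustrative assumptions, and real products vary:

```python
# Back-of-the-envelope bandwidth arithmetic for uncompressed 4K video.
# Figures are illustrative assumptions, not any product's specification.

width, height, fps = 3840, 2160, 60  # 4K UHD at 60 frames per second

def raw_gbps(bits_per_pixel):
    # Raw pixel data rate in gigabits per second, ignoring blanking
    # and protocol overhead.
    return width * height * fps * bits_per_pixel / 1e9

full = raw_gbps(24)        # 8-bit RGB / 4:4:4
subsampled = raw_gbps(12)  # 8-bit 4:2:0 chroma subsampling

print(round(full, 1), round(subsampled, 1))
```

Full-colour 4K60 comes out just under 12Gbps of raw pixel data, while chroma-subsampled video fits comfortably inside 10Gbps, which is why a 10Gbps network is the working threshold for this class of system.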