Tuesday, 17 October 2017

Vibrant Video Logistics

The term ‘plug and play’ has been around for decades and mostly pertains to the ease of use of technology systems. It’s especially appealing to non-technical consumers about to open the box of a new technology product. But, at least for the AV professional, it’s often a different story. Video communication is stealing the show in the modern age – from marketing platforms to information management and entertainment. For the viewer, it’s as simple as pressing a button and selecting what to watch, or even watching whatever is already displayed. The intricate part, however, is delivering the content from the source to the screen – a process unseen by the user. This has been the cause of many a grey hair for the AV professional. Video distribution has its limitations, which need to be understood when designing signal distribution systems. And as technology develops, new signal types bring new challenges.

The most common signal type in present-day video is HDMI (High-Definition Multimedia Interface) because of its versatile design. With the introduction of high definition satellite television to the residential market, digital video connection between source and screen was done with HDMI. It was a simple, fresh and effective new component that arrived with the whole new world of high definition television. In fact, HDMI was developed to carry a variety of signals in a single cable. This primarily includes a video component, capable of transporting video signals at extremely high resolutions, and an embedded digital audio signal that delivers adequate audio information for the most recent surround sound variants. These two components are sufficient to deliver a comprehensible video signal, but HDMI technology exceeds this by far, and additional information is available on the same cable infrastructure.

Thus, over and above the audiovisual components, HDMI 1.4 includes an Ethernet channel, enabling high-speed, bi-directional network connectivity at up to 100 Mbps. Display capability is communicated through EDID (Extended Display Identification Data), which identifies the native resolution of a display as soon as it is connected to a video source. The source component responds by transmitting the content at the optimum resolution the screen is capable of receiving. In most cases this will be full HD (1920x1080) or, these days, even UHD (3840x2160). Challenges arise when, for example, a full HD signal is sent to a WXGA projector with a native resolution of only 1280x800 – which would be incapable of displaying the complete pixel space of the source image. In this scenario, one of two adjustments can be made to display the content. Ideally, the display device would scan convert (downscale) the image to match its own native resolution. If this is not possible, the source device would reduce its own output resolution to meet the display. The latter is not ideal for networked video distribution systems, as the source content’s quality would then be reduced on all connected displays – regardless of their capability to handle higher definition video.
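
To make this negotiation concrete, here is a minimal sketch (in Python, using a made-up list of modes rather than real EDID data) of the decision a source faces when a display’s native resolution is smaller than the content:

```python
# Sketch of EDID-style resolution negotiation (illustrative only; real EDID
# is a binary block parsed by the source's firmware).

SOURCE_MODES = [(3840, 2160), (1920, 1080), (1280, 720)]

def pick_output_mode(display_native, source_modes=SOURCE_MODES):
    """Return the highest source mode that fits the display's pixel space."""
    w, h = display_native
    candidates = [m for m in source_modes if m[0] <= w and m[1] <= h]
    # Prefer the candidate with the highest pixel count.
    return max(candidates, key=lambda m: m[0] * m[1]) if candidates else source_modes[-1]

print(pick_output_mode((3840, 2160)))  # UHD display    -> (3840, 2160)
print(pick_output_mode((1280, 800)))   # WXGA projector -> (1280, 720)
```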

HDCP (High-bandwidth Digital Content Protection) is the main element behind the development of HDMI as a standard video format. For decades, Hollywood producers have been fighting the piracy war, despite content duplication being legally prohibited. It has always been a losing battle, as there is no effective way to prevent consumers from duplicating analogue video. In a digital world, parameters can be introduced to a signal that only allow duplication when certain criteria are met. HDCP did exactly that by creating a standard under which content cannot be played back unless every component in the particular system conforms to a licensed process. Enforcement is much easier when all stakeholders in the video supply chain are forced to conform, instead of just the end consumer. HDMI’s trump card is that the source component will not transmit a video signal unless a digital handshake confirms that the display device is licensed as well. Recording equipment will not tick all the required boxes and hence remains unlicensed. A source device will thus not release a video signal when connected to a recording device. In layman’s terms, HDCP blocks the digital signal flow unless the display device is HDCP compliant. Under this economic pressure, the manufacturers of video players and displays are very eager to license their products to avoid paying the price of non-compliance. Unfortunately, there will always be electronic devices on the black market that override these parameters.
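
As a deliberately simplified toy model of that gatekeeping – not the real HDCP key exchange, which involves key selection vectors and computed check values – the source-side logic amounts to this:

```python
# Toy model of an HDCP-style handshake (deliberately simplified; device IDs
# here are hypothetical stand-ins for the real licensing mechanism).

LICENSED_SINKS = {"TV-1234", "PROJ-5678"}

def source_transmit(sink_id):
    """Only release the video stream if the sink proves it is licensed."""
    if sink_id in LICENSED_SINKS:
        return "video stream flowing"
    return "output muted: sink failed the handshake"

print(source_transmit("TV-1234"))      # licensed display -> video flows
print(source_transmit("RECORDER-99"))  # unlicensed recorder -> no video
```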

CEC (Consumer Electronics Control) is another feature of HDMI, which enables users to control multiple connected devices from a single control interface. A DVD player, for example, can be controlled via the connected HDMI feed, or a third-party control system can control sources and display devices from a single user interface. With all the above components forming part of the HDMI signal, bandwidth requirements are clearly extensive and, as a result, distribution distances are limited. A full HD signal should not be distributed further than about 15m. In a standard residential dwelling this will be adequate, but in larger residences, and in professional systems especially, distribution becomes a challenge. Cables are available in lengths of up to 22m and will work well at lower resolutions, but this becomes problematic as the bandwidth increases. In these larger systems even 22m cables won’t be adequate to distribute signals to all displays.

With these distance limitations, an entire new world of opportunities has opened up. Many technologies are available to distribute HDMI – each with unique architecture and challenges. There are balun transmitter and receiver sets that distribute the HDMI signal components over twisted pair cables, and HDBaseT, which uses the same infrastructure but also transmits additional signals such as control protocols and even low voltage power on the same cable. Fibre optic media are positioning themselves amongst these technologies and, although expensive, are capable of transmitting digital signals across much longer distances.

The latest technology breakthrough to see the light is based on the IP infrastructure available in everyday IT networks. AV and IT are encroaching on each other’s territory as both improve. The challenge has always been that the available bandwidth in present-day networks is far too low for the enormous data rate of a video signal. Many compression techniques bridge the gap, and higher bandwidth networks are available, but the latter adds an extra zero to the price tag. Despite all these red flags, Video over IP technology adds a fresh new dynamic to video distribution systems by replacing the conventional video router or switcher with a network switch. True to IT network architecture, video inputs and outputs can be connected at any patch point across the network and routing takes place based on IP addresses. Because video signals can be distributed as IP packets, wireless systems are also seeing the light of day – but many creases still need to be ironed out. It will happen eventually, it’s just not possible to say exactly when.
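
A quick back-of-envelope check, assuming 8 bits per colour and no compression, shows why a standard gigabit port cannot carry even a single uncompressed full HD stream:

```python
# Uncompressed full HD at 60 frames per second vs a 1 Gbps network port.
width, height, bytes_per_pixel, fps = 1920, 1080, 3, 60
gbps = width * height * bytes_per_pixel * 8 * fps / 1e9
print(f"{gbps:.1f} Gbps needed vs 1.0 Gbps available")  # ~3.0 Gbps needed
```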

Video distribution, from small to large systems, may have given consultants and technicians many a headache in this exciting industry. But, luckily, almost always with a happy ending.

Monday, 18 September 2017

Ever-changing technology and the AV professional

Click here to read the original article in ProSystems Africa News July/August 2017

Technology is a major dynamic of the 21st century. For the audiovisual professional, the challenge is the crazy pace of communication technology development around audiovisual applications and how this impacts the men and women specialising in this vibrant industry.

The modern world’s digital systems will never completely substitute their analogue predecessors. The reason, as mentioned in a previous article, is simple – humans are analogue beings. Digitisation, however, is considered a breakthrough and technologically superior to conventional analogue methods. This is not entirely true, and many old-school technicians and audiophiles will gladly debate the difference in quality between analogue audio, for example, and its compressed digital versions. The reality is that analogue and digital signals are applied differently from one another.

Human communication is reliant on the analogue spectrum. Our eyes see light through an optical system that distinguishes between different colours in various shapes and sizes. Our vocal cords produce sound waves which we manipulate with the cavities of our mouths, throats and lips. We do this in order to produce a compilation of sounds in concert, to deliver a message. Our hearing systems identify audible vibrations, which are used to recognise a message. None of the above systems are digital – humans cannot comprehend digital signals.

Digital signal technology pertaining to audiovisual applications was developed to overcome the challenges related to analogue signal distribution, duplication and storage. Physical factors cause analogue signals to lose energy or be absorbed by the noise all around us, and thus become unidentifiable by a recipient. In response, the architecture of digital signals is designed around only two states: on/off, yes/no or, more realistically, ones and zeroes. If only two states need to be transmitted, this can easily be done with the positive and negative polarity of an analogue wave. The wave will erode over distance, as any analogue wave would, but as long as the receiving end is able to identify whether the wave is positive or negative, no information is lost, and by means of a digital-to-analogue converter the entire original wave can be reproduced. The challenge (and this is where the audiophiles will have you for breakfast) arises in the level of detail that gets transmitted.
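
A small sketch illustrates the principle: as long as the receiver can still tell positive from negative, attenuation and moderate noise cost nothing. The amplitude and noise figures below are arbitrary illustrations:

```python
import random

def transmit(bits, attenuation=0.2, noise=0.15):
    """Model a cable: each bit is a +/- pulse, scaled down and disturbed."""
    return [(1 if b else -1) * attenuation + random.uniform(-noise, noise)
            for b in bits]

def receive(waves):
    """Recover bits by polarity alone - the lost amplitude is irrelevant."""
    return [1 if w > 0 else 0 for w in waves]

payload = [1, 0, 1, 1, 0, 0, 1, 0]
print(receive(transmit(payload)) == payload)  # True: nothing was lost
```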

So where do humans fit in? It is common knowledge that technology replaces the human element in many fields on a daily basis, and the industry as a whole changes with technology developments. With the cost of technology decreasing by the day, consumer patterns are changing and more people are exposed to, or are using, audiovisual technology daily in residential or corporate environments. Digital processes simplify operations and, as a result, the professional skillset is perceived to no longer be required in order to perform basic duties. However, when it comes to customised, high-end audiovisual applications, experience and skills are what raise the bar.

Within audiovisual system designs and applications, there will always be a requirement for specialised knowledge. That means someone with experience and relevant application logic who can assess a specific customer requirement and respond with an adequate design, without exceeding said requirement or the available budget. Apart from the design element, the implementation needs to be executed by a technical professional who can accurately configure and/or programme products and systems. This technician should be able to identify possible challenges or faults and provide solutions when required. Most senior professionals within the industry are very capable but face the challenge of adapting to, and maintaining knowledge in, an ever-developing industry. It will never cease to grow, and the sooner one gets involved the easier it is to keep up.

On the opposite end of the spectrum, the younger generation has some challenges of its own. They are entering an industry that is already at full pace with digital technology. Because of the technology-driven lifestyles of the modern era, young technicians will easily grasp modern audiovisual processes. The difficulty they face, however, is to fully comprehend the concept of digital video and audio, for example, without understanding the analogue components thereof. The same challenge applies to understanding digital signals such as DVI, SDI and HDMI without knowledge and experience of composite and component video, S-Video and 5-cable RGBHV. The in-depth theory of these can be explained a million times, but the practical exposure will be minimal or none.

To add to the above challenge, audiovisual training material and curricula are largely absent from the average university arsenal. A graduate with a communications degree could add much value to the industry on a certain level, but will be an expensive resource and may not know how to perform basic functions – such as calculating the screen size for a custom auditorium (see the sketch below) or overcoming challenges pertaining to HDMI signal distribution. In the AV industry there is a shortage of training programmes. Many industry stakeholders such as product manufacturers offer training courses in order to transfer knowledge and skill, but these are mostly biased and likely to form part of a marketing strategy to promote their products and inflate sales.
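
For illustration, one long-standing industry rule of thumb (often called the 4/6/8 rule) sizes the image from the farthest viewer’s distance. The sketch below assumes that rule rather than any formal standard:

```python
# 4/6/8 rule: minimum image height is the farthest viewer's distance divided
# by 8 (general video), 6 (detailed presentations) or 4 (close inspection).
DIVISORS = {"general": 8, "detailed": 6, "inspection": 4}

def min_image_height(farthest_viewer_m, task="detailed"):
    return farthest_viewer_m / DIVISORS[task]

# A 12 m deep auditorium used for presentations and spreadsheets:
print(f"{min_image_height(12, 'detailed'):.1f} m image height")  # 2.0 m
```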


This is where the global institution InfoComm made its name. Locally, a professional body, SACIA (Southern African Communications Industry Association), fulfils this role. Although SACIA represents many sectors in the communications industry, its education offering pertaining to professional AV is of excellent value to the aspiring young AV professional.

SACIA was established by a number of industry pioneers with the objective of building a member base that would promote the adoption of professional standards and ethical business practices. SACIA offers a range of professional designations for various sectors of the communications industry. It is recognised by SAQA (South African Qualifications Authority) as the certifying body for the AV/communications sector and operates under regulations defined within the National Qualifications Framework Act. The organisation is responsible for developing and awarding professional designations that recognise competence in the communications industry. SAQA is an organ of state, designed to oversee the further development and implementation of government’s education policy, and the professional designations offered by SACIA are therefore widely recognised. All the SACIA professional designations are listed on the NQF (National Qualifications Framework) of South Africa. They are also recognised throughout the SADC (Southern African Development Community) region and are listed on the RQF (Regional Qualifications Framework). In order to maintain a professional designation, individuals need to participate in a programme of continuing professional development, so that anyone holding a designation stays up to date on the latest technology shaping the future of the AV industry.

SACIA also promotes transformation in the industry and does not exclude any individual who wishes to partake in training and development. Thanks to SACIA, school leavers now have additional options to confuse them and a great opportunity to become part of this exciting industry filled with toys and technology.

Wednesday, 17 May 2017

Sending Successful Video Signals

Click here to read the original article in ProSystems Africa News

As consumers living in the 21st century, we are almost constantly surrounded by video displays. Video has become an extremely popular and effective communication platform and is widely used for a large variety of purposes. Apart from cinemas and television broadcasts, people share many things from news, social media content and product marketing to showcasing talent. Whether the talent is amateur or professional, the same video platform is utilised. The consuming audience experiences all this by watching a video screen showing video content, but behind the scenes, from a professional AV perspective, it’s an entirely different ball game.

Every video system comprises three elementary components: source devices, sink devices, and the connection between the two endpoints that completes the link. A source device is the ‘start’ point of the particular system’s signal. This is the device that outputs or produces the video signal, and it could be one of various components. The most common examples are DVD or media players, personal computers and computer servers with video output ports. Another popular example of a source device is a video camera, producing and transmitting a video signal. The sink devices form the other end of the line: they are the video screens on which the video is viewed. Multiple technologies are used to display video, such as television screens, modular LED panels (digital billboards or big screens at sport events and concerts) and video projectors, which project light onto a reflective display surface. The latter is very common in presentation venues, cinemas, auditoriums and concert stages. The connection between these endpoints is where the challenge lies.

At home it might seem like a really simple task to connect a DVD player to a television with an HDMI cable and Bob, as they say, is your uncle. In the professional video industry, systems can be a lot more complex depending on the application. Some systems require more than one video source to be routed to a single display, as in boardrooms with multiple connection points. Others require a single source to be distributed to multiple display screens, similar to systems in airports and shopping centres. Apart from these, there are systems that require both of the above in one solution, i.e. multiple sources routed to multiple displays in any desired configuration. This also requires the functionality to manipulate a current configuration and reroute any source to any screen or screens at the press of a button (see the sketch below). All of the above is achievable, as equipment is readily available to meet these needs, as long as the video designer understands the system architecture. Accurate design will ensure that the correct components are used and configured accordingly. The users can then take charge and create whatever configurations they desire. As mentioned, the real challenges lie in the connection between source components and display screens. Systems such as these are mostly based on unique requirements and are designed to provide custom solutions in response. The challenges that accompany signal distribution are therefore not exactly black and white.
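
Conceptually, the routing core of such a matrix system is nothing more than a table mapping each display to a source. A minimal sketch, with hypothetical device names:

```python
routes = {}  # display name -> source name

def route(source, *displays):
    """Point one source at any number of displays; rerouting rewrites the map."""
    for d in displays:
        routes[d] = source

route("MediaPlayer", "Lobby", "Boardroom")  # one source to many screens
route("Laptop-HDMI", "Boardroom")           # reroute one screen on demand
print(routes)  # {'Lobby': 'MediaPlayer', 'Boardroom': 'Laptop-HDMI'}
```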

Video signals are based on analogue waves which are transmitted over a specific medium, such as copper or fibre cables. Depending on the format and the resolution of the video signal being sent, there are distance limitations. Over longer distances, the physical environment causes signals to weaken until they are no longer viable for video reproduction. In analogue signals such as RGBHV (computer graphics, or ‘VGA’) or composite video (standard definition video from a player or camera), it is the amplitude of the physical wave that matters. A transmitted signal’s energy declines as it travels further along a cable. An additional challenge is the radio waves constantly present in the air all around us, which interfere with the cable and the transmitted signal. Although this radio noise is mostly fairly weak, it is picked up across the entire length of a cable and is continuously present anywhere on that cable. This ever-present background level is known as the noise floor.

Coaxial cables, used for analogue signal distribution, are designed with a thick copper core and a foil or wire-strand braiding around the outer diameter. This design functions as a Faraday cage and effectively reduces noise interference. The noise, however, cannot be eliminated entirely. Thus, of the available amplitude of the original signal, one can only utilise the portion above the noise floor. This margin is known as the signal-to-noise ratio, and it needs to be sufficient for the video to be reproduced. Analogue signals can be amplified to increase their amplitude, in an attempt to make the waves travel longer distances. The catch is that the amplifying component cannot distinguish between the video signal and the noise floor: it amplifies all the waves present on the cable, producing an increase in signal amplitude and an equal increase in the noise floor. The result is that the signal-to-noise ratio remains unchanged. Due to this phenomenon, amplification needs to be applied nearer to the source, where the signal-to-noise ratio is still sufficient. Once it is no longer adequate, the signal will still be present, but not clean enough for video reproduction.
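
The arithmetic below shows why: whatever gain is applied multiplies signal and noise alike, so the ratio between them – the only thing that matters – stays put. The voltages are illustrative:

```python
signal, noise_floor, gain = 0.5, 0.05, 4.0   # volts, volts, amplifier gain

snr_before = signal / noise_floor
snr_after = (signal * gain) / (noise_floor * gain)  # noise is amplified too

print(snr_before, snr_after)  # 10.0 10.0 - the amplifier changed nothing
```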

Certain applications require video transmission over such large distances that repeated amplification fails to qualify as a viable solution. Balun technology was developed to extend signal ranges. The architecture requires a transmitter and receiver with a CAT cable in between. Baluns – a name derived from the terms balanced and unbalanced – use technology similar to balanced audio systems: the video signal is duplicated at the transmitter end and both copies are sent over the twisted pairs to the receiver end. The only difference between the two is that the second copy’s polarity is inverted into a negative version of the first. The noise picked up along the way, however, cannot be manipulated and thus remains identical on both copies. The receiver end therefore collects a positive video signal plus noise on one pair, and a negative video signal plus the same noise on the other. By subtracting one from the other, everything that matches – in this case only the noise – is eliminated, leaving a clean, noise-free video signal.
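
A minimal numerical sketch of that balanced-pair trick, with made-up sample values, shows the noise vanishing in the subtraction:

```python
samples = [0.3, -0.1, 0.4, 0.2]       # original video waveform
noise = [0.08, -0.05, 0.02, 0.07]     # identical interference on both pairs

wire_a = [s + n for s, n in zip(samples, noise)]    # signal + noise
wire_b = [-s + n for s, n in zip(samples, noise)]   # inverted signal + noise

recovered = [(a - b) / 2 for a, b in zip(wire_a, wire_b)]
print(recovered)  # matches the original samples (bar float rounding)
```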

In the digital world things work a bit differently. Digital video is not the only component to be distributed. Many formats exist, such as DVI and SDI, to cater for different applications, but HDMI is the most common signal for high definition digital video distribution. It is not video alone but rather a multimedia format which hosts a collection of additional signals: audio feeds, power, Ethernet, EDID (Extended Display Identification Data), which is used to automatically identify a matching, optimum resolution between a source and a display, and lastly HDCP (High-bandwidth Digital Content Protection), a protocol initiated by film producers which prohibits end users from duplicating copyrighted content. This is a large amount of information to be transported, and the distance limitations are much more severe than with analogue video.

A digital video signal is still an analogue wave, but the polarity is manipulated to be either positive or negative. A positive wave represents a 1 and a negative wave represents a 0. In a previous article I explained how the analogue metric information is converted to a binary string. Binary numbers consist of only 1s and 0s, and the strings can thus be sent over larger distances. The receiving end collects the strings of 1s and 0s and reconstructs the information before it is converted back to analogue signals for human beings to comprehend. The noise floor is still present, but the signal-to-noise ratio is almost irrelevant, as the receiver only needs to establish whether a wave’s amplitude is positive or negative in order to identify the strings. Thus no information is lost. Unfortunately, as soon as the signal is interrupted or becomes too weak and digits start getting lost, the strings become corrupted and the initial information can be neither established nor reproduced. The entire signal is then useless. In other words, a digital signal works 100% or 0% – there is no in-between. This is known as the ‘cliff effect’.
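
The sketch below sweeps the noise level past the pulse amplitude to show the cliff: error-free reception, then a sudden collapse. All values are illustrative:

```python
import random

random.seed(1)
bits = [random.randint(0, 1) for _ in range(10_000)]

for noise in (0.05, 0.15, 0.25, 0.35):
    waves = [(1 if b else -1) * 0.2 + random.uniform(-noise, noise)
             for b in bits]
    errors = sum(b != (1 if w > 0 else 0) for b, w in zip(bits, waves))
    print(f"noise {noise:.2f}: {errors / len(bits):.1%} bit errors")
# Errors sit at 0.0% until the noise exceeds the 0.2 pulse, then jump.
```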

In spite of this amazing technology, HDMI is still limited, and as soon as a cable exceeds 15m, an alternative solution is required to distribute the signal. The beauty of these challenges within current technologies is that they receive a lot of attention and funding: solution-focused technologies are constantly seeing the light of day. Range extenders similar to baluns exist and are a popular solution. A few years back, HDBaseT was introduced to distribute HD digital video. HDBaseT uses good quality Cat5e cable and up to distribute media signals, control signals and electricity to power equipment (depending on consumption), all over one cable. HDBaseT technology was really well received, and many display manufacturers invested in having their products accept HDBaseT as an input, thus eliminating the receiving component of the HDBaseT distribution system. A further popular way to extend digital signals is to use fibre systems instead of copper. Fibre transmitters send light along an optical fibre over much longer distances than copper allows, because of the far lower attenuation. The same strings of 1s and 0s are sent, and the same principle applies at the receiver end.

The latest and greatest form of distributing video (and media signals) is to convert them to IP packets and distribute them over an IP network. The world is full of IP networks, and in principle it makes sense to simply introduce the media signal to a network and then collect it anywhere else on the same network, or even at multiple places, at different times. This great idea stalled as soon as the bandwidth requirements became evident. Video has been streamed over networks for many years, but it has had to be compressed to such a low quality in order to make it viable to send and download. Like anything in today’s world, the technology caught on, and video over IP is becoming more and more available. Technology even exists to distribute 4K resolution uncompressed while still supporting HDCP. Because of the massive amount of bandwidth required, the network needs to run at 10Gbps – which is not excessive anymore. Slowly but surely the IT world will upgrade networks to 10Gbps. The only other obstacle for video over IP systems is to convince the IT managers and network-security teams. The latter might be the biggest challenge, but it will have to be addressed.


Video distribution has come a long and innovative way to where it is today. The current pace of technology development is unlikely to slow down. The near future is leaning towards integrating into existing IT technology which will then form the backbone to all our communication needs. But then again, who knows what the future holds?

Tuesday, 28 March 2017

Selecting The Right Projector

Projector selection in today’s world can be a confusing and time consuming exercise. There are many different models and specifications available, and purchasing the correct unit for a specific application is not as easy as it seems.

Projectors are distinguished by various specifications. Normally, the key aspect in projector selection – apart from budget – is the brightness output, which, although essential, is not the only deciding factor. Other distinctive specifications include resolution, imaging technology, aspect ratio, light source and, of course, budget! These are the variables that need to be considered for a specific application. When regularly designing audiovisual systems, one develops a ‘feel’ for an average ‘go to’ projector within budget which will be sufficient for most solutions. In the interest of quick, effective design this is acceptable, but ideally one should select the perfect projector for every custom solution.

Aspect Ratio
In my opinion as an audiovisual professional, the first element that should be looked at, irrespective of the rest of the solution, is the choice between aspect ratios. Traditional 4:3 projectors are still widely used today, partly because they are cost-effective in our challenging economic times. When budgets are cut, an easy method of saving is to select an XGA or even lower resolution projector with a 4:3 aspect ratio. Another appealing way to save money when upgrading a current system is to reuse the existing 4:3 aspect ratio screens and then match a projector to said screen. Although this is an option, it is still a concern from a professional perspective. New designs should be based on modern wide screen solutions. The reason to stay away from 4:3 aspect ratio projectors is the low resolution. If a full HD signal is displayed on an XGA projector, the pixel count drops from roughly 2.07 million to at most 786 432 – which essentially means the end user sacrifices around 60% of the image detail. If a client refuses to upgrade their existing 4:3 screens, an interim solution would be to use a 16:9 or 16:10 projector, even one with a lower resolution, on said screens until the screens can be upgraded accordingly. The result will be a wide image, failing to fill the top and bottom sections of the screen – not ideal, but adequate for the time being. Even if a 4:3 projector is installed to match the screen and a presentation from a modern wide screen laptop is displayed, the result will essentially be the same.

Brightness
The standard measurement for projector brightness is the lumen, measured according to the American National Standards Institute (ANSI) method. One foot-candle is the amount of light measured on a surface one foot away from one candle; ANSI lumens are derived by averaging foot-candle readings across the projected image and multiplying by the screen area in square feet. Projectors are available in different levels of brightness for a multitude of different uses. In a home theatre set-up, a 2000 ANSI lumens projector might be more than sufficient because of the low-light environment, but for a big outdoor concert one would require something in the region of 10 000 to 15 000 lumens, or even higher, depending on the size of the projected image. A large corporate boardroom or small auditorium will require around 4000 to 5000 ANSI lumens.
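
For a rough check on whether a projector is bright enough for a given screen, a common back-of-envelope formula divides lumens (times screen gain) by the screen area in square feet to get foot-lamberts of image luminance. The figures below are illustrative assumptions:

```python
def foot_lamberts(ansi_lumens, screen_w_ft, screen_h_ft, gain=1.0):
    """Approximate on-screen luminance; cinema practice targets ~16 fL."""
    return ansi_lumens * gain / (screen_w_ft * screen_h_ft)

# A 5000-lumen projector on a 12 ft x 6.75 ft (16:9) boardroom screen:
print(f"{foot_lamberts(5000, 12, 6.75):.0f} fL")  # ~62 fL
```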

Resolution
The native resolution of a projector is the fixed number of physical pixels the projector is able to display. Almost all projectors have scaling technology built into them, which means the projector can accept video signals of various resolutions and scale them to match its native resolution. This feature, however, is misunderstood by consumers – and even by certain resellers lacking the correct knowledge, who could advise consumers on an incorrect product. They may, for example, propose an attractively priced XGA projector – with only 1024 x 768 physical pixels – and mention the projector’s capability of processing WUXGA signals of up to 2.3 million pixels. This is correct. What it actually means, however, is that the projector will reduce the original 2.3 million pixels to the available 786 432 pixels, thereby discarding almost 66% of the image information. If the entire video chain (camera and distribution equipment) is based on WUXGA, it would prove useless if the display element only produces XGA resolution. A boardroom screen displaying spreadsheets, or a movie theatre, for example, would require higher resolutions. In a church or auditorium setting, where the viewers are further away from the screens and the majority of the content is graphics, a standard WXGA would be adequate and higher brightness should be the priority. In areas where displays are used for inspection or to show detail, or where exceptionally large screens are used, UHD or 4K would deliver the best results if the budget allows for it.
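
The pixel arithmetic behind these percentages is easy to verify. The sketch below compares the common resolutions mentioned in this article:

```python
RESOLUTIONS = {"XGA": (1024, 768), "WXGA": (1280, 800),
               "Full HD": (1920, 1080), "WUXGA": (1920, 1200)}

def pixels(name):
    w, h = RESOLUTIONS[name]
    return w * h

for native in ("XGA", "WXGA"):
    for source in ("Full HD", "WUXGA"):
        lost = 1 - pixels(native) / pixels(source)
        print(f"{source} onto {native}: {lost:.0%} of the pixels discarded")
# WUXGA onto XGA discards ~66% - exactly the reduction described above.
```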

Imaging Technology
Few consumers are concerned with whether a projector is LCD (Liquid Crystal Display) or DLP (Digital Light Processing), and many don’t even know the difference. Apart from the above technologies, there are also LCoS (Liquid Crystal on Silicon) as well as laser phosphor DLP, the latter being slightly different to conventional DLP. These technologies have advantages and disadvantages which are mostly irrelevant to the average user, but which make each more effective for certain applications. One of the downsides to DLP is the rainbow effect: because of the architecture of DLP technology and the pulses of different colours of light, a multicolour line can appear for a split second between dark and light areas. This can be disruptive in a cinema or home theatre environment, especially where there is a minimal amount of ambient light. A positive side to DLP is that higher brightness is achieved at higher resolutions because of the efficiency of the DMD (Digital Micromirror Device) chip. LCD projectors (depending on the manufacturer) are much better at producing accurate colour, and for this reason LCD projectors are mostly suggested for clients needing to do design proposals or display colour-critical content for inspection purposes. A negative of LCD is the lower contrast, which is caused by light bleeding through the micro space between the pixels.

Light Source
Conventional projectors mostly utilise UHP (Ultra High Pressure) and HID (High Intensity Discharge) lamps. These lamps were perfectly adequate for their original purpose, even though they were not energy efficient and produced high thermal emissions when in use. They also did not have a very long lifespan (2000 – 4000 hours depending on the model) and are quite costly to replace. Newer technologies use solid state lighting such as LED and laser phosphor. LED projectors were the first to surface in the market and were hugely popular because of their low energy consumption and extended light source life (up to 20 000 hours). However, they lacked brightness and could not adequately replace their predecessors. Laser phosphor is the newest kid on the block, delivering higher brightness as well as longevity. This is definitely the way forward, albeit more costly to acquire. Because the light source typically lasts the projector’s working life with near-zero maintenance, it is a much better investment long-term.
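
As a rough illustration of the long-term economics – all prices are assumptions for the example, not quotes – compare lamp replacements against a once-off laser premium over a 20 000-hour service life:

```python
HOURS_OF_USE = 20_000   # matches a typical laser phosphor rating
LAMP_LIFE = 3_000       # hours, mid-range UHP lamp
LAMP_PRICE = 350        # assumed replacement lamp price
LASER_PREMIUM = 1_500   # assumed extra purchase cost of the laser model

replacements = HOURS_OF_USE // LAMP_LIFE
print(f"{replacements} lamp replacements costing {replacements * LAMP_PRICE} "
      f"vs a once-off premium of {LASER_PREMIUM}")
# 6 lamp replacements costing 2100 vs a once-off premium of 1500
```
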
Ultimately, the customer needs a working solution and many projectors will provide a sufficient result for a variety of different applications. However, selecting the correct projector for a specific application will deliver optimum results and exceed customer expectations.

Tuesday, 28 February 2017

The Pixel Age


The most common and primitive method of video signal transmission is between any illuminated object and the human eye. Light from the sun, or any alternative source, reflects off an object towards us at, well, the speed of light. In today’s digital world, littered with video streaming wherever we turn, things happen, rather, at the speed of data packets.

Analogue signals will never be conquered by the digital world. This is based on pure physics. Analogue light and sound waves exist all around us, mostly without human interference. Another reason why we’ll always need the analogue spectrum is that humans are analogue beings. Our eyes and ears receive analogue waves in order to see and hear, and our vocal cords produce analogue vibrations for us to be audible. In fact, any digital communication device in the modern world requires A-D (analogue to digital) and D-A conversion at either end of the system to make it usable for human beings. The digital part is purely the technology used to transport the information between end-points without quality loss.

Media streaming is nothing new. The most prevalent example of streamed video and audio is probably the popular website YouTube, and many other web services also use the World Wide Web to distribute video content. Video streaming is, however, not limited to internet connectivity, and there are many applications which require video distribution over Local Area Networks (LAN) or Wide Area Networks (WAN) in point-to-point or multipoint IP networks. The conventional way to distribute video is over copper-cabled systems – from elementary Radio Frequency (RF) networks to higher resolution video formats such as RGBHV (Red, Green, Blue, Horizontal, Vertical), commonly (and incorrectly) termed VGA signals. When digital video technologies surfaced, they brought along High Definition (HD) resolutions and formats such as HDMI (High-Definition Multimedia Interface), DVI (Digital Visual Interface), DisplayPort and SDI (Serial Digital Interface), with the latter still being distributed on RG59 coaxial cable. Parallel to digital video technology development, Information Technology (IT) also took off at a breakneck pace.

When an image is displayed optically with an overhead or slide projector, for example, the image is created from light projected through a filmstrip and a lens onto a distant surface, which magnifies every little bit of detail, showing perfect lines and curves. In order to reproduce the same image digitally, one has to subdivide the entire frame into tiny blocks (pixels) and colour them individually to form a pixelated image. The more pixels used in the frame, the better the image quality will appear, but it is theoretically impossible to produce a perfect curve using square pixels. Even when the pixels are so tiny that the human eye cannot differentiate between them, a curve may appear smooth but will, in fact, always consist of tiny squares. Higher definitions are therefore the best way to produce quality digital video images.
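
The point is easy to demonstrate: the sketch below rasterises the same circle on a coarse grid and a finer one, and the stair-stepping never fully disappears:

```python
def draw_circle(grid):
    """Print an ASCII 'circle': pixels whose centres fall inside the radius."""
    r = grid / 2
    for y in range(grid):
        print("".join("#" if (x - r + 0.5) ** 2 + (y - r + 0.5) ** 2 <= r * r
                      else "." for x in range(grid)))

draw_circle(9)    # blocky 'circle' - the square pixels are obvious
print()
draw_circle(25)   # more pixels: smoother, but still made of squares
```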

In order to convert the information of one digital pixel, the analogue wave is plotted on an X and Y axis and broken up into various samples, at a specific sample rate. The metric coordinates of each sample are then converted to a binary system – a numerical system that only utilises ones and noughts and most commonly groups eight digits to represent any number between nought and 255. Each digit is known as a ‘bit’, and eight bits make up one ‘byte’. The high number of pixels in an image, along with other relevant information, thus results in a large amount of data – kilobytes and megabytes. Moving video is made up of a series of still images displayed in quick sequence to create the illusion of a moving object. Standard formats use 24, 25 and 30 Frames per Second (FPS). The amount of data for one still image thus needs to be multiplied by the frame rate, which gives the bandwidth required to transmit the video signal on an IP network for every second that it plays – expressed, for example, in Mbps (megabits per second).
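
The whole chain – one analogue sample quantised to eight bits, then one frame’s pixels scaled up to a per-second rate – fits in a few lines. The frame parameters are illustrative:

```python
sample = 0.73                          # analogue amplitude, normalised 0..1
code = round(sample * 255)             # 8-bit quantisation: 0..255
print(f"{code} -> binary {code:08b}")  # 186 -> binary 10111010

width, height, bytes_per_pixel, fps = 1920, 1080, 3, 30
frame_bytes = width * height * bytes_per_pixel
print(f"{frame_bytes / 1e6:.1f} MB per frame, "
      f"{frame_bytes * 8 * fps / 1e6:.0f} Mbps uncompressed")  # ~1493 Mbps
```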

Once a signal is digital, the challenge is that the bandwidth required for HD video exceeds the capacity of most commonly used IP networks. These networks are adequate for basic networking requirements, but video streaming will consume the available data flow and congest the entire network, rendering it unusable for other users. Higher capacity networks are available, but at an inflated cost, which is difficult to justify for video streaming alone if not purposely required.

Video Compression
This brings us to the reason why certain video signals are compressed for IP distribution. Many different compression formats are available and currently in use, divided into lossless and lossy codecs. Many video codecs are necessarily lossy, simply because they eliminate information to reduce bandwidth. Lossy codecs compress video using a variety of algorithms. Basic lossy codecs throw away data at regular intervals, which is effective at reducing bandwidth but may result in much lower image quality. Another effective lossy approach is based on an analysis of human vision, dismissing information the eye would find visually redundant, such as very close colour variances. Human vision is much more sensitive to brightness (luma) than to colour (chroma) and will not distinguish between pixels of closely related colour. The third is called ‘chroma subsampling’, which reduces colour information at regular intervals. Colour sample ratios are often written as 4:2:2, 4:2:0, 4:1:1 and similar: relative to a block four pixels wide and two rows high, the second digit gives the number of chroma samples in the first row and the third digit the number of chroma samples in the second row.
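
The byte counting behind those ratios is straightforward. The sketch below assumes one byte per sample and a block four pixels wide by two rows high:

```python
def block_bytes(j, a, b):
    """Bytes for one J:a:b block: full luma plus the stated chroma samples."""
    luma = j * 2                  # every pixel keeps its brightness sample
    chroma = (a + b) * 2          # Cb and Cr samples across the two rows
    return luma + chroma

for scheme in ((4, 4, 4), (4, 2, 2), (4, 2, 0), (4, 1, 1)):
    total = block_bytes(*scheme)
    print(f"{scheme}: {total} bytes ({total / block_bytes(4, 4, 4):.0%} of 4:4:4)")
# 4:2:0 keeps only half the data of 4:4:4 with little visible difference.
```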

Lossless codecs, as the name indicates, do not discard any information; instead they compress video by avoiding the transmission of duplicate pixel information. Duplicate pixels exist where large areas of an image carry the same pixel information, such as a large background of a single colour with little motion, or across adjacent video frames in long scenes – fixed film sets such as filmed interviews, for example – which result in many frames having near-identical pixel information. For these frames, only the changing pixels are transmitted. Another form of video compression groups average pixel values together, where several pixels are averaged out into one large rectangular pixel of the same value.
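
A minimal sketch of that changed-pixels-only idea, using a toy eight-pixel ‘frame’:

```python
def frame_delta(prev, curr):
    """Return (index, new_value) pairs for the pixels that changed."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

frame1 = [10, 10, 10, 10, 80, 80, 10, 10]   # a mostly static scene
frame2 = [10, 10, 10, 10, 81, 82, 10, 10]   # only the moving object changed

print(frame_delta(frame1, frame2))  # [(4, 81), (5, 82)] - 2 pixels, not 8
```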

Most of these systems allow the user to adjust settings prior to compression, such as a specific resolution, frame rate reduction and bit rate, preconfigured for maximum quality versus maximum compression. Multiple formats have been developed over the years. JPEG, HDV and MPEG-2 are examples of lossy compression, with the latter compressing data across multiple frames (interframe) instead of within individual frames (intraframe). MPEG-4 and its Part 10 (H.264) are some of the most popular compression formats used in current systems.

Uncompressed video is also seeing the light of day, but in order to distribute it one would need a network with the required capacity, such as 10Gbps – and who knows what the future holds. All we know for now is that the future of video distribution lies in IP systems.