YouTube Live and Facebook Live: the biggest players make huge mistakes

I had been into live video streaming on the web long before YouTube Live opened the feature to a wide audience, simply by working directly with my own video players and media streaming servers.

These days I specialise in live streaming to Facebook Live and YouTube Live, working with the APIs of these two large platforms, the de facto world standards of modern video content delivery.

While I admire the success of these giants, I cannot understand why so many technical issues still exist when one attempts to stream an event via either of these social networks.

You can go either way: the API route meant for professionals and television channels, or the simple, supposedly convenient route of a graphical user interface, right in your web browser.

Let us take a closer look at streaming to YouTube via the GUI.

Streaming normally starts in your channel's Creator Studio, where you create a new video event and grab a stream key to perform your worldwide show.

YouTube Live will suggest that you switch to the new interface, which has been in beta for a long time already.

This is something you do not want to do under any circumstances. Once in it, you will be completely lost, and the option to broadcast via an encoder (such as Open Broadcaster) is unavailable there.

Accordingly, every time you select the encoder route, the system kicks you back to the old interface.

Broadcasting with a built-in or USB webcam has been made easier, but YouTube is a professional platform whose users want the option of delivering high-quality content via external encoders.
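For readers unfamiliar with the encoder route, here is a minimal sketch of what it boils down to, using ffmpeg and YouTube's standard RTMP ingest; the input file and the stream key at the end are placeholders (you copy the key from your event):

# Push a local file to YouTube in real time; Open Broadcaster does essentially
# the same, just with the key pasted into its settings.
ffmpeg -re -i input.mp4 \
  -c:v libx264 -preset veryfast -b:v 3000k \
  -c:a aac -b:a 128k \
  -f flv rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY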

Let us imagine we wish to use an external encoder, the oldest and most reliable way to feed a stream to YouTube. Having gone through event creation and configuration, especially on a serious YouTube Live channel, you expect to see a preview of your stream.

Well, no. There is a preview player, but it does not work until you transition your stream into preview (test) mode. That is quite a UI design flaw: something that is already displayed and seemingly available simply does not work.

Okay, it has been this way for years, so let us call it a tradition.

My next click goes to the “Stream preview” button, and the system informs me that it is preparing a preview stream. The button now changes its text to “Stream live”, but wait: we are on the preview page, in the corresponding mode, yet the preview player still throws an error, identical to the one shown before switching into this purpose-built mode.

I must admit that over all the years since YouTube Live launched, I have become so used to this situation that I had forgotten YouTube even has a stream preview mode, despite it being documented everywhere and present in the interface as, effectively, a fake feature.

Why such a flaw exists in an otherwise popular platform, I do not know. A few years ago a new option called “Camera” appeared (though I would rather they fixed the old method). It works the way I end up using the old one: no preview, and whatever you stream to YouTube is pushed live immediately. I have no idea why YouTube has such a bad relationship with preview streams.

If you are as advanced with YouTube as I am, you might suppose that using the API will help you out, but no luck. If it did, I would not have written this article.

As for the preview, you cannot fetch the preview player via the API either, so until you start the live broadcast there is no way to reliably know whether your stream is fine. Just hope for the best.

Other than that, the API works reliably and lets you bypass the YouTube interface in most cases.
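To give an idea, here is a minimal sketch of the typical API flow with curl, assuming you already hold an OAuth access token in $TOKEN with the YouTube scope; BROADCAST_ID and STREAM_ID stand for the ids returned by the first two calls, and the field names follow the public Live Streaming API documentation:

# 1. Create the broadcast (the "video event").
curl -s -X POST \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"snippet":{"title":"My show","scheduledStartTime":"2019-06-01T18:00:00Z"},"status":{"privacyStatus":"public"}}' \
  "https://www.googleapis.com/youtube/v3/liveBroadcasts?part=snippet,status"

# 2. Create the stream; the response carries the ingestion address and stream key.
curl -s -X POST \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"snippet":{"title":"My encoder"},"cdn":{"ingestionType":"rtmp","resolution":"720p","frameRate":"30fps"}}' \
  "https://www.googleapis.com/youtube/v3/liveStreams?part=snippet,cdn"

# 3. Bind them together and point the encoder at the returned ingestion address.
#    There is still no call that gives you a working preview player.
curl -s -X POST -H "Authorization: Bearer $TOKEN" \
  "https://www.googleapis.com/youtube/v3/liveBroadcasts/bind?id=BROADCAST_ID&streamId=STREAM_ID&part=id,status"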

Now Facebook

Facebook Live is much younger than YouTube's offering, but it is already widespread.

A typical user of a modern social video broadcasting platform is used to creating a video broadcast, going in, copying the streaming key, pasting it into their video encoder and enjoying the result.

What if you broadcast a little more often and are tired of reconfiguring the encoder every time? There is a solution called a persistent streaming key. Once selected, the key stays the same per Facebook page or per user account, and you can set it in your video encoder once and for all.

At least this scenario is realistic for YouTube. 

On Facebook, this feature, despite having been available for almost a year, does not work properly. The persistent key often de-selects itself when you edit a video, change a description, or for no apparent reason at all.

If you create multiple streams with the pre-assigned persistent key, Facebook goes crazy and no longer knows where to broadcast, even though it could simply route the stream to whichever player's scheduled start time comes first. With due diligence you could keep checking whether the key has been dropped and tick it back on, but with about three live events per day this approach is useless because of the self-deselection bug mentioned above.

In my situation I use variable streaming keys and automate feeding them to the encoder via the API, because setting a persistent key over the API is… impossible. That is right: a feature meant for professional broadcasters is completely flawed and useless.

The API has other limitations too: you cannot edit the description or title of the video, only a visit to the Facebook UI gets that done. You also cannot upload a slate (the preview image of your video) over the API; again, a visit to Facebook is a must.

Where the API does shine is in creating “invisible” videos that get published automatically with the first bits of an incoming stream. This magic cannot be achieved through the interface.
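For illustration, a minimal sketch of that flow with curl, assuming a Page access token with live-video permissions in $PAGE_TOKEN; PAGE_ID and the Graph API version are placeholders, and the field names follow the public Graph API documentation:

# Create an unpublished live video on a Page. The response contains an id and
# an RTMPS ingest URL (secure_stream_url) that already embeds a one-off stream key.
curl -s -X POST \
  -d "status=UNPUBLISHED" \
  -d "title=My show" \
  -d "access_token=$PAGE_TOKEN" \
  "https://graph.facebook.com/v3.2/PAGE_ID/live_videos"
# Point the encoder at the returned secure_stream_url; the post publishes itself
# once the first bits of the stream arrive.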

You normally do not expect such things from companies so well-funded and so glorified.

For my own needs I have built a lot of workarounds that automate broadcasting as much as possible anyway, even where the platforms do not seem designed for the job.

I really wonder how large studios cope with these challenges, and why the big players have failed to solve them for years.

How to de-brick a hard-bricked router in 2019 (using the Asus RT-N16 as an example)

There are other articles on this topic, but I decided to summarise everything in one place, based on the tools we have for the job in 2019. Provided you have the items listed below, the task should take you 5 hours or less to get the router running again. Starting from scratch, however, it took me around 5 days.

If your router is bricked and you cannot recover it using any technique provided by the manufacturer, chances are it is hard-bricked. In my case the power light would not come on, only the network port lights. There was no serial console output at all, and the 30/30/30 recovery method did not work.

In this case you still have a chance to de-brick the device over JTAG. To use it, you will need:

  1. A Raspberry Pi (this example uses a Raspberry Pi 3 B);
  2. Jumper wires or an IDE ribbon cable from an old computer's IDE drives;
  3. A soldering iron, or a spring-loaded (pogo-pin) connector with wires, to contact the printed circuit board;
  4. A screwdriver to disassemble and reassemble the device;
  5. Six 100 Ohm resistors rated at about 1/8 W.

First, open the router and locate the JTAG connector. You can find online where the JTAG header is on your specific router; sometimes you have to solder to several different places on the board to collect all the signals.

In the case of the famous RT-N16, the connector is conveniently located in one place and is marked J1.

Now choose how to make the connection. I did not have a spring-loaded connector handy, so I went for soldering. The board sits on an aluminium sheet, so it tends to heat up whenever you solder; be careful.

Remember, JTAG does not tolerate long cables, so try to stay within 20 cm. If you are already using a 20-30 cm IDE ribbon cable, the setup will still work even if you attach another 10 cm of cable or jumper wires leading to the board; perhaps because the IDE ribbon keeps all the wires in line, they do not interfere with each other much.

The signals of the connector are on the left, while ground sits on the right. At first I wondered whether I should solder to one ground pin and additionally connect all the other ground pins, but in fact you only need a single ground pin; the rest are already tied together through the board's common ground.

Another important step is to power the router on and check the voltage levels on the soldered/connected wires. Most pins should show 3.3 V, occasionally one pin may show 2.7 V, and nSRST is usually around 0 V.

If you see lower or fluctuating values, re-check your soldering and the quality of the JTAG connection. I only managed to get a proper connection on the third attempt, even though everything looked properly attached.

If you have made it this far, you are very close to the result.

Connect the Raspberry Pi 3 B in one of two ways:

  1. Via an IDE cable

Make sure the IDE cable faces inwards on the Pi's 40-pin header rather than outwards. If it faces outwards, all the header pins are mirrored left to right and you will have to account for that. You can place the resistors into the pin holes at the other end of the IDE cable; pliers help to bend the resistor leads and crimp them into a thicker bundle for proper contact.

  2. Via jumper wires

A 100 Ohm resistor must be connected in series with every signal line except ground. You can verify the pin numbers with a simple LED blink test on the Raspberry Pi, as sketched below.
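For example, here is a minimal sketch of such a test using the legacy sysfs GPIO interface available on 2019-era Raspbian. Note that it takes BCM GPIO numbers rather than physical header pins; BCM 25 (physical pin 22) is used here purely as an assumed example, so substitute the pin you want to verify. Run it as root:

echo 25 > /sys/class/gpio/export              # BCM 25 = physical header pin 22
echo out > /sys/class/gpio/gpio25/direction
for i in 1 2 3 4 5; do                        # blink so you can watch the LED or probe the pin
  echo 1 > /sys/class/gpio/gpio25/value; sleep 1
  echo 0 > /sys/class/gpio/gpio25/value; sleep 1
done
echo 25 > /sys/class/gpio/unexport            # clean up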

Next, compile the openocd utility on the Raspberry Pi. Installing it via apt will not help, because that build lacks the functionality we need here:

sudo apt-get update
sudo apt-get install -y git autoconf libtool libftdi-dev libusb-1.0-0-dev
mkdir -p ~/src; cd ~/src
git clone --recursive git://git.code.sf.net/p/openocd/code openocd-git
cd openocd-git
# Build with the sysfsgpio driver (needed to bit-bang JTAG over the Pi's
# GPIO header) plus the usual set of adapter drivers.
./bootstrap && \
./configure --enable-sysfsgpio \
  --enable-maintainer-mode \
  --disable-werror \
  --enable-ftdi \
  --enable-ep93xx \
  --enable-at91rm9200 \
  --enable-usbprog \
  --enable-presto_libftdi \
  --enable-jlink \
  --enable-vsllink \
  --enable-rlink \
  --enable-arm-jtag-ew \
  --enable-dummy \
  --enable-buspirate \
  --enable-ulink \
  --enable-usb_blaster_libftdi \
  --prefix=/usr && \
make && \
sudo make install

The pin numbers you have to connect are listed in the openocd interface config file:

/usr/share/openocd/scripts/interface/sysfsgpio-raspberrypi.cfg

For convenience, here they are:

RPI HEADER    JTAG CONNECTOR
6 (GND)       GROUND (one of the right-hand pins)
19            TDI
21            TDO
22            TMS
23            TCK
26            nTRST

At this point you can list the partitions on the router, and dump or erase them. I recommend dumping and storing your CFE just in case (do it twice and compare, to make sure the resulting files contain no errors).

To dump the CFE (the router's “BIOS”):

cd /usr/share/openocd/scripts; sudo openocd -f interface/sysfsgpio-raspberrypi.cfg -f tools/firmware-recovery.tcl -c "board asus-rt-n16; dump_part CFE /root/cfe.0.bin; shutdown"

To list partitions:

sudo openocd -f interface/sysfsgpio-raspberrypi.cfg -f tools/firmware-recovery.tcl -c "board asus-rt-n16; list_partitions; shutdown"

To de-brick:

sudo openocd -f interface/sysfsgpio-raspberrypi.cfg -f tools/firmware-recovery.tcl -c "board asus-rt-n16; erase_part nvram; shutdown"

Usually, to de-brick a hard-bricked router, only the NVRAM partition has to be erased. After a power-cycle, the router will be ready for a firmware flash.

I flashed the firmware using the Linux tftp utility. The best-supported third-party firmware for the RT-N16 that I found at the moment is AdvancedTomato, but if you go for the stock firmware, it is not that bad nowadays either.
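For reference, a rough sketch of that flash, assuming the router's recovery mode listens on the usual 192.168.1.1 and that firmware.trx is the image you downloaded (check both for your model):

# Give the machine running tftp a static address in the same subnet first,
# e.g. 192.168.1.2/24 on the wired interface, then:
tftp 192.168.1.1
tftp> binary
tftp> put firmware.trx
tftp> quit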

In case anything goes wrong, just double- and triple-check your contacts and voltage levels on JTAG.

How do I autostart VMWare ESXi 6.0-6.7 virtual machines?

This task took me a year to figure out, so I would like to share the solution in case you run into the same problem and come across this article while searching for it.

If you have ever used VMWare's ESXi, you know that versions prior to ESXi 6.0 were managed with the Windows vSphere Client, and everything worked properly.

But it was not convenient for everyone to keep a Windows machine handy just to log in to the VMWare control panel. If, like me, you were one of those people, or you were lucky enough to start your VMWare journey with ESXi 6.0, you will have noticed that this application refuses to support versions above 6.0; instead, you use the Host Client, which has a web interface.

The story could have ended here, but not this time. Once you have set everything up and your virtual machines are happily running, you will be hard pressed to find out how to make them start when the host is power-cycled.

If you migrated from an older VMWare version and got lucky, your machines will continue to auto-start. But what about new machines actually created with ESXi 6.0+? After every power-cycle they simply stay off.

In the documentation all over the Internet you can read that autostart is controlled by a priority setting on your VMs. Unfortunately, this setting does not work.

Before ESXi 6.0 you could explicitly set a boot-up flag in the Windows vSphere Client, so how do you solve it now?

If you search for it, you will not find it; I came across the solution completely by accident.

There is a VMWare utility meant for running virtual machines on Linux or Windows, called VMWare Workstation Pro. It is a commercial product, but you can try it for free for 30 days.

– Use it to connect to your host via the “Connect to a Remote Server” button on the initial screen;

– After a successful connection, right-click the host's IP or hostname in the left pane and select VM power actions;

– You will see “Auto Start” flags that can be set next to every virtual machine.

By the way, VMWare Workstation Pro lacks many features that are present in the web-based Host Client. Strangely, this one option missing from the web interface is available here, while about 80% of the other options live only in the web interface.

I am not suggesting that you install or use VMWare Workstation Pro permanently; I am merely demonstrating that a hidden mechanism exists which is not exposed in the Host Client's interface and can be triggered, for example, this way.
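If you are comfortable with the command line, the same autostart manager can, as far as I know, also be reached from the ESXi shell over SSH with vim-cmd. Treat the following only as a sketch: the VM id (1 here) comes from getallvms, and the argument order is what I recall from ESXi 6.x, so double-check it against your version:

vim-cmd vmsvc/getallvms                                   # note the Vmid of your machine
vim-cmd hostsvc/autostartmanager/enable_autostart true    # turn the autostart manager on
vim-cmd hostsvc/autostartmanager/update_autostartentry 1 "powerOn" "120" "1" "guestShutdown" "120" "systemDefault"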

Please endorse and share this article if it has helped you get to know VMWare ESXi better.

Web presence – challenges and solutions

Modern companies require a good plan to maintain their web presence.

Whether it is a corporate image site, a marketing campaign, an e-commerce resource, or an e-business in its own right, the importance of a proper Internet infrastructure in terms of stability, performance and cost efficiency is hard to overestimate.

When it comes to e-commerce or an Internet-based business, InfoSec (information security) is another important aspect: it ensures your customers' data stays private and keeps expensive lawsuits at bay.

Today the biggest players on the market invariably utilise site protection firewalls from centralised suppliers, sign up for DDoS protection measures (such as IP traffic filtering), and employ multi-layered defences.

Virtual private networks have become a must; be it a site-to-site or an SSL-based VPN, it is impossible to imagine any sensitive administrative access without one.

Just a few years back, one would protect only the web pages containing a login form with HTTPS. Today the presence of HTTPS is a ranking factor in Google, it is recommended on every page, and plain HTTP is discouraged. SSL encryption is now available even to small low-cost web-sites, while even larger market players sometimes rely on the free LetsEncrypt service.
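As an illustration, a minimal sketch of obtaining such a certificate with LetsEncrypt's certbot client, assuming a Debian/Ubuntu server running nginx and with example.com standing in for your domain:

sudo apt-get install -y certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com   # request and install the certificate
sudo certbot renew --dry-run                             # verify that automatic renewal will work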

HTTP/2, the improvement over HTTP/1.1, does not even function in web browsers over a plain HTTP connection. And with the spread of tracking tools across the web, every request a browser sends over an unencrypted connection is liable to leak sensitive information about user behaviour.
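You can see this for yourself with a quick check; the sketch below assumes a reasonably recent curl built with HTTP/2 support and uses example.com as a placeholder:

# Print the protocol version actually negotiated: "2" over https://,
# while over plain http:// you stay on HTTP/1.1.
curl -s --http2 -o /dev/null -w '%{http_version}\n' https://example.com
curl -s --http2 -o /dev/null -w '%{http_version}\n' http://example.com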

Changing your password once in a while may have been a good enough strategy back in the day. Nowadays it is not enough to ensure a long passphrase is used; you also need mechanisms in your enterprise that enforce a password-change policy of every 90 days at the latest.
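For example, a minimal sketch of enforcing that on a Linux server with the standard chage tool; the account name alice is just a placeholder:

sudo chage --maxdays 90 --warndays 7 alice   # force a password change at least every 90 days
sudo chage -l alice                          # review the resulting ageing policy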

Does your company run regular security scans of its web site? If not, you need to hurry up. Scans help uncover vulnerabilities in the code of your web resource, and manual follow-up checks ensure nothing obvious was missed.
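As one example of such a scan, here is a baseline pass with the open-source nikto scanner; example.com is a placeholder, and of course you should only run it against sites you are authorised to test:

nikto -h https://example.com -o nikto-report.html -Format htm   # basic scan, HTML report for review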

The resulting report lets your software engineers fix the discovered issues, protecting valuable data and your public image.

How does your web resource handle errors, for instance when something happens to the database or any other critical part of the site? Remember: no information about internal paths or database credentials should be leaked in such cases.

Someone deliberately provoking a failure in one of your site's components could trigger an uncontrolled data leak. Always make sure your software engineers handle such failures properly and show only a generic error message.

Do you have a login form on your web site? Then remember: never hint at whether it was the username or the password that was entered incorrectly. Nor should a user be able to tell whether the e-mail address used for password recovery exists in the database. Show only generic error messages and reveal nothing about the data you hold until both the username and the password have been entered correctly.

If weak passwords are allowed during registration for the convenience of visitors, do not forget a password strength meter. It is good practice to make users aware of the risk they take by choosing a simple password.

If you run a popular CMS, be it WordPress, Drupal or similar, think twice before leaving it unattended. Stable as they are, such CMSes owe their steady stream of security exploits to their very popularity: new problems are discovered all the time, and you must be ready to update the system. If you plan for an unattended system, your best bet is a custom framework.
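If the CMS in question happens to be WordPress, a minimal sketch of keeping it patched from the command line with WP-CLI (assuming WP-CLI is installed and the commands are run from the site's directory):

wp core update          # update WordPress core
wp plugin update --all  # update all plugins
wp theme update --all   # update all themes
wp core version         # confirm the installed version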