Combining a proximity sensor with an RGB LED


The sensor and the RGB LED

The basic idea

Let’s say you want a device which can tell you, in a simple way, whether something is dangerous. In the present case, let’s assume there is a very hot stove and you must avoid touching it at all costs. There are many ways to do it. In my case, as I am playing with Arduino, the best way to see the shields and other components in action is to combine them in a useful way. The current project uses a proximity sensor made by Sharp, a common anode RGB LED and, of course, the classic Arduino board plus a breadboard.


The proximity sensor with the color-coded connectors

The sensor

The proximity sensor is the Sharp 0A41SK (or GP2Y0A41SK0F). It comes with a color-coded connector:

  • red –> 5Vcc
  • black –> GND
  • yellow –> Analog pin 0

RGB LED in common anode configuration


The LED is not very hard to use, but mine didn’t light up, and that is how I learned about the two possible RGB LED configurations:

  • common anode
  • common cathode

If you assume the first type and the LED doesn’t light at all, you must conclude it is the opposite type. Such was my case. I was afraid the LED was broken, but it functioned normally.


1K resistor


The resistors

I used three 1K resistors. Surprisingly, while people around the internet say that the LED is poorly lit when combined with 1K resistors, in my case it was very bright. Maybe the common anode configuration favors this type of LED.

The schematics

The only particular aspect of the circuit is the use of the 5Vcc pin instead of GND for the RGB LED. This requires an inversion of the values on pins D5, D6, D7, but more on that later. As can be seen in the picture, there are two distinct circuits that can be tested independently. In fact, I tested the half-circuits separately before putting everything together.

The configuration of the LED

Due to the peculiarities of the LED, it is lit when there is a voltage between the 5Vcc pin and the pins D5, D6, D7. In other words, the three individual diodes that make up the LED are lit when the three pins are set to LOW. Similarly, the LED is unlit when the pins are set to HIGH.

The configuration of the sensor

The proximity sensor is connected to the A0 analog pin. As I am using a WeMos board, it is the only analog pin and I am glad it is available on the board.

Playing with the values

The ADC converter returns a value between 0 and 1023, corresponding to a voltage between 0 and 5Vcc. I used a mathematical formula to calculate the voltage. Now, there are many libraries that can be used, but as I didn’t like the distances they produced, I preferred to do the job myself, so I wrote my own distance calculation function based on the plot tables provided by Sharp. I will provide the code later, on GitHub.

The LED and the sensor

The main idea is to split the range of the sensor into three smaller sub-ranges and associate a color with each one of them:

  • 4 – 10 cm: blue
  • 10 – 20 cm: green
  • 20 – 30 cm: red

If the object or the hand is at more than 30 cm (about a foot) from the sensor, we consider the stove is not harmful. Otherwise, we have three intervals and three warning colors. Each color is independent from the others.

If the frequency of the loop is set to, say, 50 ms, the whole system is very reactive to the movement of a hand.


I use a WeMos board. As many have pointed out, the Espressif chip has some current leakages. In my case, the D5 pin, which is SCK and powers the red color, is connected to an on-board LED, and I suspect there are some issues with it: the red color behaves as if a capacitor needed to recharge periodically. Anyway, I am happy with my WeMos board. I have already tested several interesting configurations with it.

As a final remark, when I tried to use pins other than D5, the LED stayed unlit. There might be an issue with the current available on the board. There are plenty of forums and groups out there and I will dig into this.


Speech recognition and the real world


The margin of error is shrinking down


A recent article from GeekWire caught my attention. It seems that Microsoft, a pioneer in speech recognition, has reached a record-low error rate. In one year, this rate has fallen from 5.9% to 5.1%. It seems impressive. IBM has announced an improvement of its speech recognition engine too, down from 6.9% to 5.5%. Alexa from Amazon is also improving. Siri from Apple is better than ever. The same goes for Google. Competition is healthy because it drives innovation and paves the way to breakthroughs. Yet, today, everyone is using the same magic. Could it be the wrong magic?


An artificial neural network

The magic under the hood

Today some, if not all, of the speech engines use what is called a neural network. Basically, the machine tries to imitate the human brain. And here is where a misconception about neural networks begins: there are 100 billion neurons in the human brain, each with 100 to 10,000 connections. Those connections are extremely important to human intelligence. So, by the numbers, there are between 10 trillion and 1 quadrillion connections.

A big number, but after all, just a number. All we need, it seems, is 1 quadrillion processors or something equivalent, and the system will be as smart as a human. Well, something has been omitted here. Yes, there are that many connections in the human brain, but the part that is considered intelligent has much less ‘smart material’. If the human brain were a ball 16 centimeters in diameter, the ‘intelligent’ part would be an outer layer less than 3 mm thick: the cerebral cortex. The rest of the brain is the animal part. Somehow, that thin outer layer has a quality that makes us smarter.


Only 5% of the words matter here


The real challenge

IBM claims that one word out of 20 is missed by a human listener. While I don’t agree with the claim, one fact is sure: people speak differently:

  • different speeds;
  • different volumes;
  • different vocabularies;
  • different pronunciations

and so on. All these differences add up to the challenge of understanding speech. The English language has about 1 million words, and 5% of a million is 50,000 words. About as many as the vocabulary of a typical speaker. Imagine 20 people in the countryside. Only one of them knows how to get to the castle of the king; the other 19 leak information that misleads. According to the current state of the art, no speech recognition system can guarantee to bring you to the king. And if such a system were part of a self-driving car, well, I don’t even want to imagine.


The true challenge is to get to Six-Sigma

The true breakthrough

A good speech system should be much closer to Six Sigma, and the reason is that it should be able to infer what word it missed, make a correct guess and ask clarifying questions. For those who are not aware, Six Sigma means about 3.4 errors in a million.

Don’t misunderstand me. 5% is a great improvement. I remember when, 20 years ago, I used Microsoft’s experimental speech recognition system and each time I said ‘iexplore’ it understood ‘Netscape’. Yes, such was the case. Today things have changed, but 5% is not good enough for me. Not if I want to put the system in a place where people’s lives depend on it.

The potential of IoT

While I am still a bit skeptical, there is huge potential for speech recognition. By embedding Alexa or Siri into a small device like a temperature controller or a water tap controller, we could interact in a more humane way with our environment. So there is hope. A new hope.

So keep working, Microsoft, IBM, Amazon, Google and all the other teams. The road is not a pleasant walk, but at the end of it there is such a big reward …


Smart Home Summit – 2017 edition

On 19th and 20th September, London will host the Smart Home Summit 2017. While IoT is a hot topic, it is at conferences like this that the real actors show up. The agenda looks impressive. The speakers too. The list includes the usual suspects and newcomers, among which:






A Brief History of Things

2001

History repeats itself. Or so it seems. Little more than a decade and a half ago, it was enough to use the words internet, web, new technology and you had the financing for your startup. We all know what happened: the dot-com bubble, the crisis, the collapse of the net-economy.

2008

Then the economy started again, stronger, better. Everybody was confident. So confident that elementary auditing techniques were ignored just like that. What happened is history. A history whose relics are still slowing down the steroid-fueled development of the economy. Because the economy is developing. It might mutate, it might change its appearance, but it is developing. And history repeats itself again and again. Why care? It goes down, it goes up.


We are less than a decade later. If you want to make a start-up and raise a lot of money, what better way than to use the magic 3-letter word: Aye Oh Tee. And to make sure the people in front of you get it, just add the word Smart in front of any other word:


  • Smart Home;
  • Smart Car;
  • Smart Mobility;
  • Smart City;

The list goes on and on. The number of small companies whose business is related to IoT is incalculable. OK, the small devices have a future of their own, that is for sure. But there is a long way ahead. Yet people have no patience. Time shrinks itself. Six months are too much for an idea to become reality, or else it gets obsolete. Are we nearing a bubble? If in 2001 the Internet was a toy for a small part of the global population, today it is different.

In the old days of DOS and Windows, there was a mantra: Plug-n-Play. A computer device had a better chance of getting into the market if it was PnP. I don’t know why, but this PnP idea seems to resurface as IoT. Which is good in the long run. The future needs to develop itself. There are many projects waiting to become a reality, a part of the landscape.