A logo is an important part of a company’s brand, and it has a significant impact on a company’s public perception and reputation.
People generally identify businesses by their logos, even when the company name isn’t part of the logo. Often, one can correctly guess the nature of a business from its logo alone, even without ever having heard of the business before. In fact, a logo is one of the most important branding investments a business can make.
Every enterprise tries to grow and shape itself into the best version of itself. While establishing itself in a sector, a company is represented not only by its professional accomplishments but also by its ‘Brand Logo’. Creating a logo for any corporate body requires a lot of effort, as it must not resemble any other brand in any sense. It takes a designer days to weeks to create a logo and validate whether there is any similarity with existing logos. So, let us see how we can build a logo-relevance mechanism, i.e., when a designer comes up with a logo, check how similar it is to the existing logo(s). This project helps logo designers reduce their burden in the process of building an ideal logo for any corporate enterprise.
The agenda of the project is to build a systematic approach to measuring the relevance of given logos. Suppose a designer comes up with an idea for a company logo. There is a certain probability that an identical idea has already been implemented. If the company publishes itself with that logo without any further research, serious consequences can follow in the corporate world under infringement action. One classic example from 2006 is ‘Starbucks Coffee’, which lost a legal battle with ‘Starpreya Coffee’ over a similar logo.
Learn more with our Data Scientists Tarunkumar Wuyyuru and Akhil Reddy Sheri.
- Application Areas
- Logo Generation:
- One compelling implementation of this application is generating a new logo from the provided logos. This can be done by performing ‘Feature Extraction’ on the input logos and combining the extracted features to generate a new logo.
- Live Logo Detection:
- With small tweaks, the application can be affixed to a live-streamed or recorded video so that it detects the logos present in that video. If a logo in the video already exists, the application can name it by adding a rectangular box around it.
- Virtual Applications:
- We can make sure that new logos are not similar to pre-existing logos.
- Physical Applications:
- A camera can be connected to the application so that when a designer draws a logo on a sheet and clicks the capture button, the application processes the image and provides the relevance of the logo to the existing ones using the same workflow.
- Alternatively, a ‘Digital Graphic Drawing Pad/Tablet’ can be attached to the system so the designer can draw the logo digitally [which most professionals do]. They can then send that digital logo to the application and get the desired results.
- Solution Architecture Specifics
- Data Gathering:
- To collect the logos, we performed web scraping on the Google search engine. A Python script was executed to achieve this data collection.
- The initial data gathering comprised a hundred images for each logo, i.e. 500 logos with 100 images each [100 images x 500 logos = 50,000 images].
- After the ‘Root-Cause Analysis’, we restricted ourselves to 12 images per logo, bringing the total to [12 images x 500 logos = 6,000 images].
- So, this was the ‘Final Training-Data’ that we considered for model building.
- Entire ‘Root-Cause Analysis’ is explained in the ‘Pre-Process’ Section.
- Pre-processing Steps [Before ‘Root-Cause Analysis’]
- Before the Root-Cause Analysis, the only pre-processing we performed was the ‘Image Resizing’ step, as we had not yet found any anomalies in the data.
- We implemented ‘Label Encoding’ & ‘Categorical Encoding’ for the target representation. The final target shape is (number of training images, number of classes).
- But as the results were inadequate, we went for a ‘Root-Cause Analysis’.
- Additional Pre-processing Steps [After ‘Root-Cause Analysis’]
- With the collected data, i.e. [100 images x 500 logos = 50,000 images], we trained our Convolutional Neural Network model, but it produced disastrous results. We experimented with different combinations of layers, nodes, max-pooling, convolutional strides, etc., but only slight improvement could be observed.
- So, we checked the data manually: we randomly selected a logo and inspected the downloaded images. We identified that the majority of the data was not in the right representation. So, we performed a variety of pre-processing steps in order to tackle these inconsistency issues.
- Problems in the Data were:
- Corrupted Images:
- Out of the images we gathered for every logo, a certain portion were corrupted and not readable by the model.
- Irrelevant/Noise Images:
- There were some irrelevant images in every class of the downloaded logos. It was hard to manually check every class for noise images and remove them all.
- Deviation in the Image Size:
- Every image was a different size, which added the extra step of including a resizing script in our workflow.
- Old vs New:
- This was a peculiar difficulty in this project. As the downloaded images contained both old and current logos of the same company, it became harder for the model to generalize well across all sorts of logos.
- Image Shape:
- The major implementation trouble, which consumed a large portion of the time, was getting the data into the appropriate shape for the model to process. Training the model on this diverse dataset was another crucial difficulty, as training took a significant amount of time. All images were resized to (50, 50, 3), i.e. each image was 50x50 pixels with 3 channels [as the images were coloured].
- As there was a prominent amount of junk in the web-scraped input data, after the ‘Root-Cause Analysis’ and all the pre-processing steps we restricted ourselves to only 12 images per logo, as they were cleaner and simpler for the model to process.
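As a sketch, the resizing step above can be reproduced with a simple nearest-neighbour resize in NumPy. In practice a library call such as OpenCV’s cv2.resize or PIL’s Image.resize would be used; the `resize_nearest` function name here is our own illustration.

```python
import numpy as np

def resize_nearest(img, out_h=50, out_w=50):
    """Resize an H x W x C image to out_h x out_w via nearest-neighbour sampling."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row index for each output row
    cols = np.arange(out_w) * w // out_w   # source column index for each output column
    return img[rows][:, cols]

# Example: shrink a dummy 120x90 RGB "logo" to the model's input shape.
logo = np.random.randint(0, 256, size=(120, 90, 3), dtype=np.uint8)
small = resize_nearest(logo)
print(small.shape)  # (50, 50, 3)
```

Applying this to every downloaded image yields a uniform (50, 50, 3) array per logo, ready to be stacked into a single training tensor.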
- Deployment elements consist of:
- Flask (Web Framework)
- HTML (creating web pages)
- Saved Model weights (For predictions/reusing for another application)
*[Deployment flow is provided in an image below]*
This is a sketch of the ‘Solution Architecture’ of the project. Now we will dive deep into the technicalities, which are presented in the next section.
- Complete Code & Implementation Details
As we all know, data is the fuel for machine learning. Here we require the logos of existing companies to go further. So, let us gather the data.
We have openly available logo datasets online, which we can use. You can download them from here.
Another option is to manually download the logos of different companies like this.
But it is practically impossible to download all the logos in such a fashion, and we also don’t know how many companies exist across the globe.
So, let us select only Forbes 300/Fortune 500 companies for model building. I selected these companies because they pursue strict legal proceedings if their logos are copied. So, I selected 300 companies from the GLOBAL 2000: THE WORLD’S LARGEST PUBLIC COMPANIES list on the Forbes website, and the Fortune 500 companies from their website. The list may have changed if you visit the website today.
Downloading the logos for each company: As I mentioned earlier, I can’t visit Google Images and download logos for the above 500 companies by hand. So, I decided to use an existing Python project from here to download/scrape images from Google Images. This is how you download the images using this Python repository.
Just run the code python google-images-download.py --keywords “INSOFE Logo, Microsoft Logo, Accenture Logo, ……………..” --limit 20 in the terminal to download your required logos.
This will download the images in the following way.
It took around 22 hours to download 100 logos each for 500 companies (50,000 images).
We did the following preprocessing steps so that we could train on the images.
- As the images were scraped, some of them were corrupted. So, we performed subsetting, keeping 12 images for each brand.
- Resized all images to size (50,50,3).
- Created a numpy array and combined all the train images.
- Numpy array is of size (no_of_train_images, 50, 50, 3).
- Similarly, created the target variable, which is of size (no_of_train_images).
- Then we had to label-encode and categorically encode the target variable so that we could apply neural network algorithms to it. (Below is an example of how we do it.)
In the above image, 0, 1, 2, 3 are the Encoded_Labels, and the right side represents the categorical encoding of the Encoded_Labels. You can read about it in more detail here.
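The same two-step encoding can also be done with scikit-learn’s LabelEncoder and Keras’s to_categorical; here is a minimal NumPy sketch, using a few hypothetical brand names in place of the 500 real classes:

```python
import numpy as np

# Hypothetical brand names standing in for the 500 real classes.
labels = np.array(["INSOFE", "Microsoft", "Accenture", "Microsoft"])

# Label encoding: map each class name to an integer 0..num_classes-1.
# np.unique sorts the classes alphabetically and returns each label's index.
classes, encoded = np.unique(labels, return_inverse=True)

# Categorical (one-hot) encoding: shape (num_train_images, num_classes).
one_hot = np.eye(len(classes))[encoded]

print(encoded)        # [1 2 0 2]  (alphabetical: Accenture=0, INSOFE=1, Microsoft=2)
print(one_hot.shape)  # (4, 3)
```

The resulting one-hot matrix has exactly the (number of training images, number of classes) shape required as the CNN’s target.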
We used a CNN model to build a classifier over the above 500 labels. Our CNN contains 2 Dense, 2 Convolution, 1 MaxPooling and 2 Dropout layers. However, we can play with the layers based on our requirements. The code for the above CNN model is present in this GitHub account.
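A minimal Keras sketch of an architecture with that layer mix (2 convolution, 1 max-pooling, 2 dropout, 2 dense) might look as follows; the filter counts, kernel sizes and dropout rates here are illustrative assumptions, not the authors’ exact values, which live in the linked repository:

```python
from tensorflow.keras import layers, models

# Sketch of a CNN classifier over 500 logo classes, input shape (50, 50, 3).
model = models.Sequential([
    layers.Input(shape=(50, 50, 3)),              # resized RGB logo images
    layers.Conv2D(32, (3, 3), activation="relu"),  # convolution layer 1
    layers.Conv2D(64, (3, 3), activation="relu"),  # convolution layer 2
    layers.MaxPooling2D((2, 2)),                   # max-pooling layer
    layers.Dropout(0.25),                          # dropout layer 1
    layers.Flatten(),
    layers.Dense(128, activation="relu"),          # dense layer 1
    layers.Dropout(0.5),                           # dropout layer 2
    layers.Dense(500, activation="softmax"),       # dense layer 2: one unit per logo class
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 500)
```

The softmax output gives a probability per brand, so the top probabilities can be read directly as the relevance of the input logo to each existing logo.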
Run the Python code app.py (python app.py) in the terminal, then open the browser and navigate to 127.0.0.1:5000, localhost:5000, or yourLocalIP:5000.
Choose the logo for which you want to find the relevance and click on predict to find the relevance.
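For reference, the Flask deployment described above can be sketched as below. The route names and the `predict_relevance` helper are our own illustrative assumptions; the actual app.py, HTML templates and saved model weights are in the authors’ repository.

```python
from flask import Flask, request

app = Flask(__name__)

def predict_relevance(image_bytes):
    # Placeholder: in the real app, the saved model weights would be loaded
    # and used to score the uploaded logo against the 500 known classes.
    return {"most_similar_logo": "example", "score": 0.0}

@app.route("/")
def index():
    # In the real app this would render an HTML upload page.
    return "Upload a logo to /predict to check its relevance."

@app.route("/predict", methods=["POST"])
def predict():
    uploaded = request.files["logo"].read()
    return predict_relevance(uploaded)  # Flask jsonifies the dict

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```

Running python app.py starts the development server on port 5000, matching the navigation instructions above.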