What is Data Literacy and How is it Playing a Vital Role in Today’s World?

What literacy was for the past century, data literacy is for the twenty-first. Many employers now prefer people with demonstrated data abilities over those with higher education, even data science degrees. According to one report, only 21% of businesses in the United States consider a degree when hiring for any position, compared to 64% who look for applicants who can demonstrate their data skills. With data viewed as a company’s backbone, it is critical that corporations help their staff use data properly.

What is Data Literacy?
The capacity to understand, work with, analyze, and communicate with data is known as data literacy. It is a skill that enables workers at all levels to ask the right questions of data and machines, build knowledge, make decisions, and communicate meaning to others. It isn’t only about comprehending data: to be data literate, you must also have the confidence to challenge evidence that isn’t behaving as it should. Literacy aids the analysis process by bringing the human element of critique into consideration. Organizations are looking for data literacy not only in data and analytics roles but in all occupations, and companies that invest rigorously in data literacy programs will outperform those that don’t.

Why is it Important?
There are various components to achieving data literacy. Tools and technology are important, but employees must also learn how to think about data so they understand when it is valuable and when it is not. When employees interact with data, they should be able to view it, manipulate it, and share the results with their colleagues. Many people turn to Excel because it is a familiar tool, but confining data to a desktop application is restrictive and leads to inconsistencies: employees get conflicting results even when looking at the same figures because the information becomes outdated. It is far better to have a single platform for viewing, analyzing, and sharing data. It provides a single source of truth, ensuring that everyone has access to the most up-to-date information, and when data is kept and managed centrally it is also much easier to enforce security and governance policies. Another vital aspect of data culture is having strong analytical, statistical, and data visualization capabilities. Data visualization can make complex data simple, letting non-specialists drill through data to find answers to their questions.

Should Everyone be Data Literate?
A prevalent misconception about data literacy is that only data scientists should devote time to it; in reality, these skills should be developed by all employees. According to a Gartner Annual Chief Data Officer (CDO) Survey, poor data literacy is one of the main roadblocks to the CDO’s success and a company’s ability to grow. To combat this, Gartner predicted that 80% of organizations would have specific initiatives to overcome their employees’ data deficiencies by 2020. Beyond the financial benefits, companies with teams that are literate in data and its methodologies can keep up with new trends and technologies, stay relevant, and leverage this skill as a competitive advantage.

How to Build Data Literacy
1. Determine your company’s existing data literacy level.
Assess where your organization stands today. Can your managers propose new projects based on data? How many individuals today genuinely make decisions based on data?

2. Identify people who speak data fluently, and identify data gaps.
You’ll need “translators” who can bridge the gap and mediate between data analysts and business groups, in addition to data analysts who can speak naturally about data. Identify any communication barriers that are preventing data from being used to its full potential in the business.

3. Explain why data literacy is so important.
Those who grasp the “why” behind efforts are more willing to support the necessary data literacy training. Make sure to explain why data literacy is so important to your company’s success.

4. Ensure data accessibility.
It’s critical to have a system in place that allows everyone to access, manipulate, analyze, and exchange data. This stage may entail locating technology, such as a data visualization or management dashboard, that will make this process easier.

5. Begin small when developing a data literacy program.
Don’t go overboard by conducting a data literacy program for everyone at the same time. Begin with one business unit at a time, using data to identify “lost opportunities.” What you learn from your pilot program can be used to improve the program in the future. Make your data literacy workshop enjoyable and engaging. Also, don’t forget that data training doesn’t have to be tedious!

6. Set a good example.
Leaders in your organization should make data insights a priority in their own work to demonstrate to the rest of the organization how important it is for your team to use data to make decisions and support everyday operations. Insist that any new product or service proposals be accompanied by relevant data and analytics to back up their claims. This reliance on data will eventually result in a data-first culture.

So, how is your organization approaching data literacy? Is it one of the strategic priorities? Is there a plan to get a Chief Data Officer? Feel free to share your thoughts in the comments section below.


How AI Benefits EHR Systems


As AI continues to make waves across the medical ecosystem, its foray into the world of EHR (electronic health records) has been interesting, largely because of the many benefits the combination offers. Now, imagine you use a basic EHR for patients. One patient is administered an MRI contrast agent before a scan. What you may not know is that they are prone to an allergy or condition that could cause the dye to harm them. Perhaps that information was in the patient’s EHR but was buried so deep that no one would have thought to look for it specifically.

An AI-enabled EMR, on the other hand, would have been able to analyze all records and determine if there was a possibility of any conditions that may render the patient susceptible to adverse reactions and alert the lab before any such dyes are administered.

Here are other benefits of AI-based EHR to help you understand how they contribute to the sector.

  1. Better diagnosis: Maintaining extensive records is extremely helpful for making a better, more informed diagnosis. With AI in the mix, the system can identify even the smallest changes in health stats to help doctors confirm or rule out a diagnosis. Such systems can also alert doctors to anomalies and link them directly to reports and conclusions submitted by doctors, ER staff, and others.
  2. Predictive analytics: One of the most important benefits of AI-enabled EHRs is that they can analyze health conditions, flag risk factors and automatically schedule appointments. Such solutions also help doctors corroborate and correlate test results and set up treatment plans or further medical investigations, delivering more robust conclusions about a patient’s well-being.
  3. Condition mapping: Countless pre-existing conditions can make medical diagnosis and procedures challenging or even dangerous. AI-enabled EHRs can easily address this by helping doctors rule out such possibilities based on factual information.

Now, let’s look at some of its challenges.

  1. Real-time access: For data to be accessible to AI, the vast amounts of data a hospital generates daily must be stored in proper data centers.
  2. Data sharing: Of course, the entire point of EHRs is to make data accessible. Unfortunately, that isn’t really possible until storage has been taken care of and the data is in the requisite formats. Unprocessed data is not impossible for AI to sift through, but doing so becomes a separate task in itself — one that takes a toll on the time available for AI’s other, more important objectives in this context.
  3. Interoperability of data: It is not enough to simply store data; the data must also be readable across a variety of devices and formats.

Artificial intelligence has a lot to offer when it comes to electronic health records and the healthcare sector in general. If you too want to put this technology to work for you, we recommend looking up a trusted custom EHR system development service provider and getting started on the development project as soon as possible.


How To Scrape Amazon Product Data


Amazon, as the largest e-commerce corporation in the United States, offers the widest range of products in the world. Their product data can be useful in a variety of ways, and you can easily extract this data with web scraping. This guide will help you develop your approach for extracting product and pricing information from Amazon, and you’ll better understand how to use web scraping tools and tricks to efficiently gather the data you need.

The Benefits of Scraping Amazon

Web scraping Amazon data helps you concentrate on competitor price research, real-time cost monitoring and seasonal shifts in order to provide consumers with better product offers. Web scraping allows you to extract relevant data from the Amazon website and save it in a spreadsheet or JSON format. You can even automate the process to update the data on a regular weekly or monthly basis.

There is currently no way to simply export product data from Amazon to a spreadsheet, but web scraping easily solves this problem. Whether it’s for competitor research, comparison shopping, creating an API for your app project or any other business need, we’ve got you covered.

Here are some other specific benefits of using a web scraper for Amazon:

  • Utilize details from product search results to improve your Amazon SEO status or Amazon marketing campaigns
  • Compare and contrast your offering with that of your competitors
  • Use review data for review management and product optimization for retailers or manufacturers
  • Discover the products that are trending and look up the top-selling product lists for a group

Scraping Amazon is an intriguing business today, with a large number of companies offering product, price, analysis, and other types of monitoring solutions specifically for Amazon. Attempting to scrape Amazon data at a wide scale, however, is a difficult process that often gets blocked by its anti-scraping technology. It’s no easy task to scrape such a giant site when you’re a beginner, so this step-by-step guide should help you scrape Amazon data, especially when you’re using Python Scrapy and Scraper API.

First, Decide On Your Web Scraping Approach

One method for scraping data from Amazon is to crawl each keyword’s category or shelf list, then request the product page for each one before moving on to the next. This is best for smaller scale, less-repetitive scraping. Another option is to create a database of products you want to track by having a list of products or ASINs (unique product identifiers), then have your Amazon web scraper scrape each of these individual pages every day/week/etc. This is the most common method among scrapers who track products for themselves or as a service.

Scrape Data From Amazon Using Scraper API with Python Scrapy 

Scraper API allows you to scrape the most challenging websites like Amazon at scale for a fraction of the cost of using residential proxies. We designed anti-bot bypasses right into the API, and you can access additional features like IP geotargeting (&country_code=us) for over 50 countries, JavaScript rendering (&render=true), JSON parsing (&autoparse=true) and more by simply adding extra parameters to your API requests. Send your requests to our single API endpoint or proxy port, and we’ll provide a successful HTML response.
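As a quick illustration, here is a minimal sketch of calling the endpoint directly with Python’s requests library; the parameter names are the ones listed above, while the API key and ASIN are placeholders:

import requests
from urllib.parse import urlencode

# Placeholders: substitute your own API key and target URL (the ASIN here is made up).
params = {
    'api_key': 'YOUR_API_KEY',
    'url': 'https://www.amazon.com/dp/B000000000',
    'country_code': 'us',    # IP geotargeting
    'render': 'true',        # JavaScript rendering (optional)
    'autoparse': 'true',     # JSON parsing (optional)
}
response = requests.get('http://api.scraperapi.com/?' + urlencode(params))
print(response.status_code)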

Start Scraping with Scrapy

Scrapy is a web crawling and data extraction framework that can be used for a variety of applications such as data mining, information retrieval and historical archiving. Since Scrapy is written in the Python programming language, you’ll need to install Python before you can use pip (Python’s package manager).

To install Scrapy using pip, run:

pip install scrapy

Then go to the folder where you want your project saved and run the “startproject” command along with the project name, “amazon_scraper”. Scrapy will construct a web scraping project folder for you, with everything already set up:

scrapy startproject amazon_scraper

The result should look like this:

├── scrapy.cfg                # deploy configuration file
└── amazon_scraper            # project's Python module, you'll import your code from here
    ├── __init__.py
    ├── items.py              # project items definition file
    ├── middlewares.py        # project middlewares file
    ├── pipelines.py          # project pipeline file
    ├── settings.py           # project settings file
    └── spiders               # a directory where spiders are located
        ├── __init__.py
        └── amazon.py         # the spider created with "genspider" below


Scrapy creates all of the files you’ll need, and each file serves a particular purpose:

  1. items.py – Can be used to define a base item (dictionary) structure, which you can then import into the spider.
  2. settings.py – All of your request settings, pipeline, and middleware activation happens in settings.py. You can adjust the delays, concurrency, and several other parameters here.
  3. pipelines.py – The items yielded by the spider are passed to pipelines.py, which is mainly used to clean the text and bind items to databases or files (Excel, SQL, etc.).
  4. middlewares.py – When you want to change how requests are made and how Scrapy handles responses, middlewares.py comes in handy.

Create an Amazon Spider

You’ve established the project’s overall structure, so now you’re ready to start working on the spiders that will do the scraping. Scrapy has a variety of spider types, but we’ll focus on the most common one, the generic Spider, in this tutorial.

Simply run the “genspider” command to make a new spider:

# syntax is --> scrapy genspider name_of_spider website.com 
scrapy genspider amazon amazon.com

Scrapy now generates a spider template for you, and you’ll find a new file called “amazon.py” in the spiders folder. Your code should look like the following:

import scrapy
class AmazonSpider(scrapy.Spider):
    name = 'amazon'
    allowed_domains = ['amazon.com']
    start_urls = ['http://www.amazon.com/']
    def parse(self, response):
        pass

Delete the default code (allowed_domains, start_urls, and the parse function) and replace it with your own, which should include these four functions (a minimal skeleton showing how they fit together follows this list):

  1. start_requests — sends an Amazon search query with a specific keyword.
  2. parse_keyword_response — extracts the ASIN value for each product returned in an Amazon keyword query, then sends a new request to Amazon for the product listing. It will also go to the next page and do the same thing.
  3. parse_product_page — extracts all of the desired data from the product page.
  4. get_url — sends the request to the Scraper API, which will return an HTML response.
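A minimal skeleton of how these four pieces fit together is sketched below; each body is filled in over the next sections, and get_url is only introduced later, when Scraper API is connected:

## amazon.py -- skeleton only; each method is built out below
import scrapy

class AmazonSpider(scrapy.Spider):
    name = 'amazon'

    def start_requests(self):
        # send an Amazon search request for each keyword
        ...

    def parse_keyword_response(self, response):
        # extract each product's ASIN, request its product page,
        # then follow the next-page link
        ...

    def parse_product_page(self, response):
        # pull the desired fields out of the product page and yield them
        ...

def get_url(url):
    # wrap a URL so the request is routed through the Scraper API proxy
    ...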

Send a Search Query to Amazon

You can now scrape Amazon for a particular keyword using the following steps, with an Amazon spider and Scraper API as the proxy solution. This will allow you to scrape all of the key details from the product page and extract each product’s ASIN. All pages returned by the keyword query will be parsed by the spider. Try using these fields for the spider to scrape from the Amazon product page:

  • ASIN
  • Product name
  • Price
  • Product description
  • Image URL
  • Available sizes and colors
  • Customer ratings
  • Number of reviews
  • Seller ranking

The first step is to create start_requests, a function that sends Amazon search requests containing our keywords. Outside of the AmazonSpider class, define a list variable containing your search keywords. Input the keywords you want to search for on Amazon into your script:

queries = ['tshirt for men', 'tshirt for women']

Inside the AmazonSpider, you can build your start_requests function, which will submit the requests to Amazon. Amazon’s search feature is accessed via a URL carrying a query string of the form “k=SEARCH_KEYWORD”, i.e. https://www.amazon.com/s?k=SEARCH_KEYWORD.

It looks like this when we use it in the start_requests function:

## amazon.py
queries = ['tshirt for men', 'tshirt for women']
class AmazonSpider(scrapy.Spider):
    def start_requests(self):
        for query in queries:
            url = 'https://www.amazon.com/s?' + urlencode({'k': query})
            yield scrapy.Request(url=url, callback=self.parse_keyword_response)

You will urlencode each query in your queries list so that it is safe to use as a query string in a URL, and then use scrapy.Request to request that URL.

Use yield instead of return: Scrapy is asynchronous, and its functions can yield either a request or a completed dictionary. If a request is yielded, Scrapy schedules it and invokes the callback method when the response comes back; if an item (dictionary) is yielded, it is sent to the data cleaning pipeline. Here, when scrapy.Request fires, the parse_keyword_response callback will extract the ASIN for each product.
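As a quick illustration of what urlencode produces here:

from urllib.parse import urlencode

urlencode({'k': 'tshirt for men'})
# 'k=tshirt+for+men'

'https://www.amazon.com/s?' + urlencode({'k': 'tshirt for men'})
# 'https://www.amazon.com/s?k=tshirt+for+men'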

How to Scrape Amazon Products

One of the most popular methods to scrape Amazon is extracting data from a product listing page, and the simplest and most common way to retrieve this data is via the product’s ASIN. Every product on Amazon has an ASIN, which is a unique identifier, and we can use this ID in our URLs (in the form https://www.amazon.com/dp/ASIN) to get the product page for any Amazon product.

Using Scrapy’s built-in XPath selector extractor methods, we can extract the ASIN value from the product listing page. You can build an XPath selector in Scrapy Shell that captures the ASIN value for each product on the product listing page and generates a URL for each product:

products = response.xpath('//*[@data-asin]')
for product in products:
    asin = product.xpath('@data-asin').extract_first()
    product_url = f"https://www.amazon.com/dp/{asin}"

The function will then be configured to send a request to this URL and call the parse_product_page callback function when it receives a response. The request will also include the meta parameter, which is used to pass data between functions or edit certain settings.

def parse_keyword_response(self, response):
        products = response.xpath('//*[@data-asin]')
        for product in products:
            asin = product.xpath('@data-asin').extract_first()
            product_url = f"https://www.amazon.com/dp/{asin}"
            yield scrapy.Request(url=product_url, callback=self.parse_product_page, meta={'asin': asin})

Extract Product Data From the Amazon Product Page

After the parse_keyword_response function requests the product page’s URL, it transfers the response it receives from Amazon, along with the ASIN ID in the meta parameter, to the parse_product_page callback function. We now want to extract the information we need from a product page, such as the product page for a t-shirt.

You need to create XPath selectors to extract each field from the HTML response we get from Amazon:

def parse_product_page(self, response):
        asin = response.meta['asin']
        title = response.xpath('//*[@id="productTitle"]/text()').extract_first()
        image = re.search('"large":"(.*?)"',response.text).groups()[0]
        rating = response.xpath('//*[@id="acrPopover"]/@title').extract_first()
        number_of_reviews = response.xpath('//*[@id="acrCustomerReviewText"]/text()').extract_first()
        bullet_points = response.xpath('//*[@id="feature-bullets"]//li/span/text()').extract()
        seller_rank = response.xpath('//*[text()="Amazon Best Sellers Rank:"]/parent::*//text()[not(parent::style)]').extract()


Try using a regex selector instead of an XPath selector for scraping the image URL if the XPath extracts the image in base64.

When working with large websites like Amazon that have a variety of product pages, you’ll find that writing a single XPath selector isn’t always enough since it will work on certain pages but not others. To deal with the different page layouts, you’ll need to write several XPath selectors in situations like these. 

When you run into this issue, give the spider three different XPath options:

def parse_product_page(self, response):
        asin = response.meta['asin']
        title = response.xpath('//*[@id="productTitle"]/text()').extract_first()
        image = re.search('"large":"(.*?)"',response.text).groups()[0]
        rating = response.xpath('//*[@id="acrPopover"]/@title').extract_first()
        number_of_reviews = response.xpath('//*[@id="acrCustomerReviewText"]/text()').extract_first()
        bullet_points = response.xpath('//*[@id="feature-bullets"]//li/span/text()').extract()
        seller_rank = response.xpath('//*[text()="Amazon Best Sellers Rank:"]/parent::*//text()[not(parent::style)]').extract()
        price = response.xpath('//*[@id="priceblock_ourprice"]/text()').extract_first()
        if not price:
            price = response.xpath('//*[@data-asin-price]/@data-asin-price').extract_first() or \
                    response.xpath('//*[@id="price_inside_buybox"]/text()').extract_first()


If the spider is unable to locate a price using the first XPath selector, it goes on to the next. If we look at the product page again, we can see that there are different sizes and colors of the product. 

To get this info, we’ll write a quick check to see if this section is on the page, and if it is, we’ll use regex selectors to extract it.

temp = response.xpath('//*[@id="twister"]')
sizes = []
colors = []
if temp:
    s = re.search('"variationValues" : ({.*})', response.text).groups()[0]
    json_acceptable = s.replace("'", "\"")
    di = json.loads(json_acceptable)
    sizes = di.get('size_name', [])
    colors = di.get('color_name', [])

When all of the pieces are in place, the parse_product_page function will return a JSON object, which will be sent to the pipelines.py file for data cleaning:

def parse_product_page(self, response):
        asin = response.meta['asin']
        title = response.xpath('//*[@id="productTitle"]/text()').extract_first()
        image = re.search('"large":"(.*?)"',response.text).groups()[0]
        rating = response.xpath('//*[@id="acrPopover"]/@title').extract_first()
        number_of_reviews = response.xpath('//*[@id="acrCustomerReviewText"]/text()').extract_first()
        price = response.xpath('//*[@id="priceblock_ourprice"]/text()').extract_first()
        if not price:
            price = response.xpath('//*[@data-asin-price]/@data-asin-price').extract_first() or \
                    response.xpath('//*[@id="price_inside_buybox"]/text()').extract_first()
        temp = response.xpath('//*[@id="twister"]')
        sizes = []
        colors = []
        if temp:
            s = re.search('"variationValues" : ({.*})', response.text).groups()[0]
            json_acceptable = s.replace("'", "\"")
            di = json.loads(json_acceptable)
            sizes = di.get('size_name', [])
            colors = di.get('color_name', [])
        bullet_points = response.xpath('//*[@id="feature-bullets"]//li/span/text()').extract()
        seller_rank = response.xpath('//*[text()="Amazon Best Sellers Rank:"]/parent::*//text()[not(parent::style)]').extract()
        yield {'asin': asin, 'Title': title, 'MainImage': image, 'Rating': rating, 'NumberOfReviews': number_of_reviews,
               'Price': price, 'AvailableSizes': sizes, 'AvailableColors': colors, 'BulletPoints': bullet_points,
               'SellerRank': seller_rank}

How To Scrape Every Amazon Product on Amazon Product Pages

Our spider can now search Amazon using the keyword we provide and scrape the product information it returns on the website. What if, on the other hand, we want our spider to go through each page and scrape the items on each one?

To accomplish this, we simply need to add a few lines of code to our parse_keyword_response function:

def parse_keyword_response(self, response):
        products = response.xpath('//*[@data-asin]')
        for product in products:
            asin = product.xpath('@data-asin').extract_first()
            product_url = f"https://www.amazon.com/dp/{asin}"
            yield scrapy.Request(url=product_url, callback=self.parse_product_page, meta={'asin': asin})
        next_page = response.xpath('//li[@class="a-last"]/a/@href').extract_first()
        if next_page:
            url = urljoin("https://www.amazon.com",next_page)
            yield scrapy.Request(url=url, callback=self.parse_keyword_response)
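
For these snippets to run together as a single spider, amazon.py also needs the modules they rely on imported at the top; a minimal set, based on the calls used in the code above, is:

## amazon.py -- module-level imports
import re       # regex extraction of the main image URL and the "variationValues" JSON
import json     # parsing the variation JSON into sizes and colors
import scrapy
from urllib.parse import urlencode, urljoin   # building search URLs and next-page URLs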

After scraping all of the product pages on the first page, the spider checks whether there is a next-page button. If there is, it retrieves the URL extension and generates a new URL for the next page.

It will then use the callback to run the parse_keyword_response function again and extract the ASIN IDs for each product as well as all of the product data, just as before.

Test Your Spider

Once you’ve developed your spider, you can now test it with the built-in Scrapy CSV exporter:

scrapy crawl amazon -o test.csv

You may notice that there are two issues:

  1. The text is sloppy and some values appear to be in lists.
  2. You’re retrieving 429 responses from Amazon: Amazon has detected that your requests are coming from a bot and is blocking the spider.

If Amazon detects a bot, it’s likely to ban your IP address and you won’t be able to scrape Amazon at all. To solve this, you need a large proxy pool and you need to rotate the proxies and headers for every request. Luckily, Scraper API can eliminate this hassle.

Connect Your Proxies with Scraper API to Scrape Amazon

Scraper API is a proxy API designed to make web scraping proxies easier to use. Instead of discovering and creating your own proxy infrastructure to rotate proxies and headers for each request, or detecting bans and bypassing anti-bots, you can simply send the URL you want to scrape to the Scraper API. Scraper API will take care of all of your proxy needs and ensure that your spider works in order to successfully scrape Amazon.

Scraper API must be integrated with your spider, and there are three ways to do so: 

  1. Via a single API endpoint
  2. Scraper API Python SDK
  3. Scraper API proxy port

If you integrate the API by configuring your spider to send all of your requests to its API endpoint, you just need to build a simple function that sends a GET request to Scraper API with the URL you want to scrape.

First, sign up for Scraper API to receive a free API key that allows you to scrape 1,000 pages per month. Then fill in the API_KEY variable with your API key:

API_KEY = 'YOUR_API_KEY'

def get_url(url):
    payload = {'api_key': API_KEY, 'url': url}
    proxy_url = 'http://api.scraperapi.com/?' + urlencode(payload)
    return proxy_url

Then change the spider functions to route every request through the Scraper API proxy by wrapping each URL with get_url(url):

def start_requests(self):
    ...
    yield scrapy.Request(url=get_url(url), callback=self.parse_keyword_response)

def parse_keyword_response(self, response):
    ...
    yield scrapy.Request(url=get_url(product_url), callback=self.parse_product_page, meta={'asin': asin})
    ...
    yield scrapy.Request(url=get_url(url), callback=self.parse_keyword_response)

Simply add an extra parameter to the payload to enable geotargeting, JS rendering, residential proxies, and other features. We’ll use Scraper API’s geotargeting feature to make Amazon think our requests are coming from the US, because Amazon adjusts the price and supplier data it displays depending on the country you’re making the request from. To accomplish this, we add "country_code=us" to the request, which is done by adding another parameter to the payload variable.

Requests for geotargeting from the United States would look like the following:

def get_url(url):
    payload = {'api_key': API_KEY, 'url': url, 'country_code': 'us'}
    proxy_url = 'http://api.scraperapi.com/?' + urlencode(payload)
    return proxy_url

Then, based on the concurrency limit of our Scraper API plan, we need to adjust the number of concurrent requests we’re authorized to make in the settings.py file. The number of requests you may make in parallel at any given time is referred to as concurrency. The quicker you can scrape, the more concurrent requests you can produce.

The spider’s maximum concurrency is set to 5 concurrent requests by default, as this is the maximum concurrency permitted on Scraper API’s free plan. If your plan allows you to scrape with higher concurrency, then be sure to increase the maximum concurrency in settings.py.

Set RETRY_TIMES to 5 to tell Scrapy to retry any failed requests, and make sure DOWNLOAD_DELAY and RANDOMIZE_DOWNLOAD_DELAY aren’t enabled, because they reduce concurrency and aren’t required with Scraper API.

## settings.py
CONCURRENT_REQUESTS = 5
RETRY_TIMES = 5
# DOWNLOAD_DELAY
# RANDOMIZE_DOWNLOAD_DELAY

Don’t Forget to Clean Up Your Data With Pipelines
As a final step, use the pipelines.py file to clean up the data, since the text is messy and some of the values appear as lists.

class TutorialPipeline:
    def process_item(self, item, spider):
        for k, v in item.items():
            if not v:
                item[k] = ''  # replace empty list or None with empty string
                continue
            if k == 'Title':
                item[k] = v.strip()
            elif k == 'Rating':
                item[k] = v.replace(' out of 5 stars', '')
            elif k == 'AvailableSizes' or k == 'AvailableColors':
                item[k] = ", ".join(v)
            elif k == 'BulletPoints':
                item[k] = ", ".join([i.strip() for i in v if i.strip()])
            elif k == 'SellerRank':
                item[k] = " ".join([i.strip() for i in v if i.strip()])
        return item

The item is transferred to the pipeline for cleaning after the spider has yielded a JSON object. We need to add the pipeline to the settings.py file to make it work:

## settings.py

ITEM_PIPELINES = {'amazon_scraper.pipelines.TutorialPipeline': 300}

Now you’re good to go and you can use the following command to run the spider and save the result to a csv file:

scrapy crawl amazon -o test.csv

How to Scrape Other Popular Amazon Pages

You can modify the language, response encoding and other aspects of the data returned by Amazon by adding extra parameters to these urls. Remember to always ensure that these urls are safely encoded. We already went over the ways to scrape an Amazon product page, but you can also try scraping the search and sellers pages by adding the following modifications to your script.

Search Page

  • To get the search results, simply enter a keyword into the url and safely encode it
  • You may add extra parameters to the search to filter the results by price, brand and other factors.

Sellers Page

  • Amazon no longer uses a dedicated page to show what other sellers offer for a product; these pages were recently updated so that a component slides in instead. To scrape this data, you must now send a request to the AJAX endpoint that populates this slide-in.
  • You can refine these findings by using additional parameters such as the item’s state, etc.

Forget Headless Browsers and Use the Right Amazon Proxy

99.9% of the time you don’t need a headless browser. In most cases, you can scrape Amazon more quickly, cheaply and reliably using standard HTTP requests rather than a headless browser. If you opt for this, don’t enable JS rendering when using the API.

Residential Proxies Aren’t Essential

Scraping Amazon at scale can be done without having to resort to residential proxies, so long as you use high quality datacenter IPs and carefully manage the proxy and user agent rotation.

Don’t Forget About Geotargeting

Geotargeting is a must when you’re scraping a site like Amazon. When scraping Amazon, make sure your requests are geotargeted correctly, or Amazon can return incorrect information. 

Previously, you could rely on cookies to geotarget your requests; however, Amazon has improved its detection and blocking of these types of requests. As a result, you must use proxies located in a given country to geotarget that country. To do this with Scraper API, for example, set country_code=us.

If you want to see results that Amazon would show to a person in the U.S., you’ll need a US proxy, and if you want to see results that Amazon would show to a person in Germany, you’ll need a German proxy. You must use proxies located in that region if you want to accurately geotarget a specific state, city or postcode.

Scraping Amazon doesn’t have to be difficult with this guide, no matter your coding abilities, scraping needs and budget. You will be able to obtain complete data and make good use of it thanks to the numerous scraping tools and tips available.


Costs of Being an Analytics Laggard…And Path to Becoming a Leader


How much money is your organization leaving on the table by not being more effective at leveraging data and analytics to power your business?

This question is becoming more and more relevant for organizations of all sizes in all industries as AI/ML capabilities become more widely available.  And nothing highlights the costs of not becoming more effective at leveraging data and analytics to power your business models than a recent study by Kearney titled “The impact of analytics in 2020”.

There are lots of great insights in this report.  One of my favorites is the Analytics Impact Index which shows the “potential percentage profitability gap” of Laggards, Followers, and Explorers vis-à-vis Analytics Leaders (see Figure 1)!


Figure 1:  Analytics Impact Index by Kearney

Figure 1 states that from a potential profitability perspective:

  • Explorers could improve profitability by 20% if they were as effective as Leaders
  • Followers could improve profitability by 55% if they were as effective as Leaders
  • Laggards could improve profitability by 81% if they were as effective as Leaders

Hey folks, this is a critical observation!  The Kearney research puts a potential cost on being an analytics laggard (or follower or explorer), and the money being left on the table is significant.  The Kearney research highlights the business-critical nature of the question:

How effective is your organization at leveraging data and analytics to power your business models?

This is the same question I asked when I released the Big Data Business Model Maturity Index on November 27, 2012. I developed the Big Data Business Model Maturity Index to help organizations understand the realm of the possible in becoming more effective at leveraging data and analytics to power their business models. The Big Data Business Model Maturity Index served two purposes:

  • Provide a benchmark against which clients could contemplate (if not answer) that data and analytics effectiveness question, and
  • Provide a roadmap for becoming more effective at leveraging data and analytics to power their business models.

I refreshed the Big Data Business Model Maturity Index to reflect the changes in advanced analytics (and the integration of design thinking) since I first created the chart. I’ve renamed the chart “Data & Analytics Business Maturity Index” to reflect that the business challenge is now more focused on the integration of data and analytics (not just Big Data) with the business to deliver measurable, material, and relevant business value (see Figure 2).


Figure 2: Data & Analytics Business Maturity Index

Unfortunately, the Kearney research was a little light on explaining the differences between the Laggard, Follower, Explorer, and Leader phases, and on providing a roadmap for navigating from one phase to the next. So, let’s expand on the characteristics of these phases, and provide a roadmap, using my 5-phase Data & Analytics Business Maturity Index.

To become more effective at leveraging data and analytics to power your business, we need a definition of the 5 phases of the Data & Analytics Business Maturity Index so that you can 1) determine where you sit vis-à-vis best-in-class data and analytics organizations and 2) determine the realm of what’s possible in leveraging data and analytics to power your business models.

  • Phase 1: Business Monitoring. Business Monitoring is the traditional Business Intelligence phase where organizations are collecting data from their operational systems to create retrospective management reports and operational dashboards that monitor and report on historically what has happened.
  • Phase 2: Business Insights. Business Insights is where organizations are applying data science (machine learning) to the organization’s internal and external data to uncover and codify customer, product, and operational insights (or predicted propensities, patterns, trends, and relationships) for the individualized human (customers, patients, doctors, drivers, operators, technicians, engineers) and/or device (wind turbines, engines, compressors, chillers, switches) that predicts likely outcomes.
  • Phase 3: Business Optimization. Business Optimization is where organizations are operationalizing their customer, product, and operational insights (predicted propensities) to create prescriptive recommendations that seek to optimize key business and operational processes. This includes the creation of “intelligent” apps and “smart” products or spaces and holistic data instrumentation that continuously seeks to optimize operational performance across a diverse set of inter-related use cases.
  • Phase 4: Insights Monetization. Insights Monetization is where organizations are monetizing their customer, product, and operational insights (or predicted propensities) to create new, market-facing monetization streams (such as new markets and audiences, new channels, new products and services, new partners, and new consumption models).
  • Phase 5: Digital Transformation. Digital Transformation is where organizations have created a continuously learning and adapting culture, both AI‐driven and human‐empowered, that seeks to optimize AI-Human interactions to identify, codify, and operationalize actionable customer, product, and operational insights to optimize operational efficiency, reinvent value creation processes, mitigate new operational and compliance risk, and continuously create new revenue opportunities.

Note #1:  Phase 4 is NOT “Data Monetization” (which implies a focus on selling one’s data). Instead, Phase 4 is titled “Insights Monetization”, where organizations focus on exploiting the unique economic characteristics of data and analytics to derive and drive new sources of customer, product, and operational value.

Note #2:  I am contemplating changing Phase 5 from Digital Transformation to Cultural Transformation or Cultural Empowerment for two reasons. 

  • First, too many folks confuse digital transformation with digitalization, which is simply the conversion of analog tasks into digital ones. For example, digitalization is replacing human meter readers, who manually record home electricity consumption data monthly, with internet-enabled meter readers that send a continuous stream of electricity consumption data to the utility company.
  • Second, it isn’t just technology that causes transformation. We just saw how the COVID-19 pandemic caused massive organizational transformation.  Yes, transformations can be forced upon us by new technologies, but they can also be caused by pandemics, massive storms, climate change, wars, social and economic unrest, terrorism, and more!

Now that we have defined the characteristics of the 5 phases of the Data & Analytics Business Maturity Index, the next step is to provide a roadmap for how organizations can navigate from one phase to the next. And while the Data & Analytics Business Maturity Index in Figure 3 is sort of an eye chart, it is critical to understand the foundational characteristics of each phase before advancing to the next.

Figure 3:  Data & Analytics Business Maturity Index Roadmap (version 2.0)

What I found interesting in Figure 3 is how the importance of Data Management and Analytic Capabilities – which are critical in the early phases of the Data & Analytics Maturity Index – gives way to the importance of Business Alignment (think Data Monetization) and Culture (think Empowerment).  I think this happens for several reasons:

  • Organizations build out their data and analytic capabilities in the early phases. And if organizations are properly curating their data assets (think data engineering, DataOps, and the data lake as a collaborative value creation platform) and engineering composable, reusable, continuously-learning analytics assets, then the data and analytics can be used across an unlimited number of use cases at near-zero marginal cost (see my Economics of Data and Analytics research captured in my new book “The Economics of Data, Analytics, and Digital Transformation”).  Yes, once you have curated your data and engineered your analytics properly, the need to add new data sources and build new analytic assets declines in importance as the organization matures!
  • The Insights Monetization phase requires business leadership to envision (using design thinking) how the organization can leverage their wealth of customer, product, and operational insights (predicted propensities) to create new monetization opportunities including new markets and audiences, new products and services, new channels and partnerships, new consumption models, etc.
  • Finally, fully enabling and exploiting the AI-Human engagement model (which defines the Digital Transformation phase) requires transforming the organizational culture by empowering both individuals and teams (think Teams of Teams) with the capabilities and confidence to identify, ideate, test, learn, and create new human and organizational capabilities that can reinvent value creation processes, mitigate new operational and compliance risk, and continuously create new revenue opportunities.

Ultimately, it is Business Alignment (and the ability to monetize insight) and Culture (and the empowerment of individuals and teams to create new sources of value) that separates Laggards, Explorers, and Followers from Leaders.

The Kearney study made it pretty clear what it is costing organizations to be Laggards (as well as Followers and Explorers) in analytics.  It truly is leaving money on the table.

And the Data & Analytics Business Maturity Index provides not only a benchmark to measure how effective your organization is at leveraging data and analytics to power your business, but also a roadmap for how your organization can become more effective.  Market-leading organizations know that becoming more effective at leveraging data and analytics goes well beyond just data and analytics: it requires close collaboration with business stakeholders (Insights Monetization) and a culture prepared for the continuously-learning and adapting AI-Human interface, creating an organization that is ready for any transformational situation (Digital or Cultural Transformation).

Seems like a pretty straightforward way to make more money…


Using Predictive Analytics to Understand Your Business Future

How accurate is predictive analytics? Is it worth using for my business? How can forecasting and prediction help me in such an uncertain environment? These are all valid questions, and they are questions your business (and your fellow business owners) must grapple with to understand the value of planning and analytical tools.

While predictive analytics is based on historical data (and, of course, things have changed over time, so you might argue that today is nothing like yesterday), there are many other benefits to an augmented analytics environment with predictive capabilities.

These tools allow the organization to apply predictive analytics to any use case using forecasting, regression, clustering and other methods, supporting an almost infinite range of scenarios: analyzing customer churn, planning for and targeting customer acquisition, identifying cross-sell opportunities, optimizing pricing and promotions, and analyzing and predicting customer preferences and buying behaviors.

All of this sounds good, doesn’t it? But, it still doesn’t address the issue of predicting the future in a changing environment. What good is historical data in today’s chaotic business environment?

Here is just one example of how assisted predictive modeling might help your business and your business users:

While the future is hard to see, your business CAN look for patterns and deviations to understand what HAS changed over the past 60 or 90 days or over the past year.

Users can hypothesize and test theories to see how market demand and customer response will change if trends continue, if the economy declines, or if there is more disposable income. For a restaurant, that might mean that the recent trend toward takeout and curbside delivery continues, even after the global pandemic has passed. People have simply gotten used to the convenience of take-out meals and how easy it is to have them delivered or to order online.

For employers, the trend toward remote work might continue as businesses look at the cost of rent and utilities for an office or facility and weigh those expenses against the benefits of remote working. Remember, history does not have to be five years ago. It can be as recent as last month! 

That’s where forecasting and planning come in. You can look at what is happening today and see how it has impacted your business and hypothesize about how things will change next month or next year.

If your business sees the value of forecasting, planning and predicting in an uncertain environment and wants to consider predictive analytics, contact us today to get started, and explore our bonus content here.


Five Graphs to Show Cause and Effect
  • Cause-and-effect relationships can be visualized in many ways.
  • Five different types of graphs explained, from simple to probabilistic.
  • Suggestions for when to use each type.

If your data shows a cause and effect relationship and you want to convey that relationship to others, you have an array of choices. Which particular graph you choose largely depends on what information you’re dealing with. For example, use a scatter plot or cause-and-effect flowchart if you want to show a causal relationship (i.e. one that you know exists) to a general audience. But if you need to graph more technical information, another chart may be more appropriate. For example, time-dependent data that has a causal relationship to data in another time period can be demonstrated with Granger Causality time series.

Contents:

  1. Cause and Effect (Fishbone) Diagram
  2. Scatter Plot
  3. Causal Graph
  4. Cause and Effect Flowchart
  5. Granger-causality Time Series

1. Cause and Effect (Fishbone) Diagram

A cause and effect diagram, also called a “fishbone” or Ishikawa diagram, can help in identifying possible causes of a problem. It’s a discovery tool that can help uncover causal relationships. Use when you want to [1]:

  • Brainstorm potential causes of a problem.
  • Identify possible causes that might not otherwise be considered.
  • Sort ideas into useful categories.

The problem or effect is shown at the “head” of the fish. Possible causes are listed on the “bones” under various categories.

[Figure: cause and effect (fishbone) diagram]

2. Scatter Plot

Scatter plots are widely available in software, including spreadsheets. They have the distinct advantage that they are easy to create and easy to understand. However, they aren’t suitable for showing every cause-and-effect relationship. Use a scatter plot when you want to:

  • Show a simple association or relationship (e.g. linear, exponential, or sinusoidal) between two variables.
  • Convey information in a simple format to a general audience.

A scatter plot can never prove cause and effect, but it can be an effective way to show a pre-determined causal relationship if you have determined that one exists.

The following scatter plot shows a linear increasing relationship between speed and traffic accidents:

[Figure: scatter plot of speed vs. traffic accidents]

Scatter plots can also be useful for showing that there isn’t a relationship between factors. For example, this plot shows that there isn’t a relationship between a person’s age and how much fly spray they purchased:

[Figure: scatter plot of age vs. fly spray purchased, showing no relationship]

If you have more than two variables, multiple scatter plots combined on a single page can help convey higher-level structures in data sets [2].
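
If you want to produce a scatter plot like the ones described above, a few lines of matplotlib are enough; the speed and accident figures below are made up purely for illustration:

import matplotlib.pyplot as plt

# Made-up data for illustration only: average speed (mph) vs. accident counts
speed = [25, 30, 35, 40, 45, 50, 55, 60, 65, 70]
accidents = [3, 4, 6, 7, 9, 11, 12, 15, 17, 20]

plt.scatter(speed, accidents)
plt.xlabel('Average speed (mph)')
plt.ylabel('Traffic accidents')
plt.title('Speed vs. traffic accidents (illustrative data)')
plt.show()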

3. Causal Graphs

A causal graph is a concise way to represent assumptions of a causal model. It encodes a causal model in the form of a directed acyclic graph [3]. Vertices show a system’s variable features and edges show direct causal relationships between features [4].

Use when you want to:

  • Show causal relations from A to B within a model.
  • Analyze the relationships between independent variables, dependent variables, and covariates.
  • Include a set of causal assumptions related to your data.
  • Show that the joint probability distribution of variables satisfy a causal Markov condition (each variable is conditionally independent with all its nondescendents, given its parents) [5]. 

If you don’t put an arrow between variables in a causal graph, you’re stating those variables are independent of each other. In other words, not putting arrows in is as informative as putting arrows in. For example, the following graph shows that while glass and thorns can cause a flat tire, there’s no relationship between those two factors:

[Figure: causal graph showing glass and thorns as independent causes of a flat tire]
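
If you want to build and check such a causal graph programmatically, a minimal sketch with the networkx library (using the flat-tire example above) might look like this:

import networkx as nx

# Causal graph for the flat-tire example: glass and thorns each cause a flat tire,
# and the absence of an edge between them encodes that they are independent.
g = nx.DiGraph()
g.add_edges_from([('glass', 'flat tire'), ('thorns', 'flat tire')])

print(nx.is_directed_acyclic_graph(g))    # True: a valid causal DAG
print(list(g.predecessors('flat tire')))  # ['glass', 'thorns']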

4. Cause and Effect Flowchart

A cause and effect flowchart is a simple way to show causation. It can be particularly effective when you want to convey the root causes of a particular problem without any probabilistic components. Use it when you want to show which events or conditions led to a particular effect or situation [6]. For example, the following cause and effect flowchart shows the main causes of declining sales for a hypothetical web-based business:
[Figure: cause and effect flowchart of declining sales for a web-based business]

5. Granger-causality Time Series

Granger causality is a probabilistic concept of causality that uses the fact that causes must precede their effects in time. A time series is Granger causal for another series if it leads to better predictions for the latter series. 

The following image shows a time series X Granger-causing a time series Y; the patterns in X are approximately repeated in Y after some time lag (two examples are indicated with arrows).

[Figure: two time series, X Granger-causing Y]

Although Granger-causal time series can be an effective way of showing a potential causal relationship in time-dependent data, temporal precedence by itself is not sufficient for establishing cause–effect relationships [7]. In other words, these graphs are ideal for showing relationships that you know exist, but not for proving one event that happening in a certain period of time caused another.
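
If you want to go beyond plotting and actually test for Granger causality, the statsmodels package provides grangercausalitytests; a minimal sketch on synthetic data (where X leads Y by one step) might look like this:

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic example: X is random noise, Y copies X with a one-step lag (plus noise),
# so X should Granger-cause Y.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.roll(x, 1) + 0.1 * rng.normal(size=200)

# Column order matters: the test asks whether the SECOND column Granger-causes the FIRST.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=2)
# Small p-values for the F-tests suggest X helps predict Y.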

References

Fishbone Diagram: FabianLange at de.wikipedia, GFDL, via Wikimedia Commons

Granger-causality Time Series: BiObserver, CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0), via Wikimedia Commons

Other images: By Author

[1] How to Use the Fishbone Tool for Cause and Effect Analysis.

[2] Scatter Plot.

[3] Pearl, J. (2009b). Causality: Models, Reasoning, and Inference. 2nd ed. Cambridge: Cambridge University Press.

[4] Having the Right Tool: Causal Graphs in Teaching Research Design

[5] Integrating Multiple Information Resources to Analyze Intrusion Alerts

[6] Cause and Effect Flowchart.

[7] Causal inference with multiple time series: principles and problems


A Small Step with RPA, a Giant Leap with Hyperautomation


The automation of business processes was already in a state of increasing adoption long before the pandemic began. However, the rapid shift to remote work coupled with COVID-19’s economic impact has kicked automation into high gear. Organizations are seeking to improve efficiency and reduce cost by reducing employees’ time spent on manual tasks.

Robotic Process Automation (RPA) platforms have traditionally driven this transformation, but with the increased demand for a variety of use cases, many organizations are finding RPA tools inadequate. And they’re wanting to add more intelligence to automation. This is where what’s called hyperautomation comes into play.

A primer on hyperautomation

Hyperautomation, predicted by Gartner to be a top strategic technology trend in 2021, is the idea that anything that can be automated in an organization should be. The analyst firm notes that it’s “driven by organizations having legacy business processes that aren’t streamlined, creating immensely expensive and extensive issues for organizations.”

As Gartner noted, “Hyperautomation brings a set of technologies like document ingestion, process mining, business process management, decision modeling, RPA with iPaaS, and AI-based intelligence.” This concept has different names: Gartner refers to it as hyperautomation, Forrester calls it Digital Process Automation (DPA) and IDC calls it Intelligent Process Automation (IPA).

In contrast, RPA is a form of business process automation: software or hardware systems automate repetitive, simple tasks, and these systems work across multiple applications – just as employees do. One of the challenges with RPA, especially in its early days, is that it didn’t scale easily. A 2019 report by Gartner found only 13% of enterprises were able to scale their early RPA initiatives.

Now enterprises have turned their attention to hyperautomation, which is not merely an extension of RPA, which was only the first step in this direction.

The term “technical debt” describes the predicament many organizations eventually find themselves in. It arises from legacy systems, suboptimal processes and bottlenecks, unstructured data residing in silos, a lack of centralized data architecture, and security gaps. Business processes typically run on a patchwork of technologies and are not optimized, coherent, or consistent. Collectively, these factors hamper operational capabilities and dampen value propositions to customers.

Whereas most tools are created to solve one problem or accomplish one goal, hyperautomation combines multiple technologies to help organizations develop strategic advantages based on the underlying operational environment. It focuses on adding progressive intelligence across workflows rather than on traditional automation solutions. Each of the technology components is designed to enhance an organization’s ability to intelligently automate processes.

The growth and business benefits of hyperautomation

Gartner analysts forecast the market for software that enables hyperautomation will reach almost $860 billion by 2025. This is a rapidly growing market – and it’s little wonder, given that hyperautomation empowers organizations to achieve operational excellence and ensure resilience across business processes. What organization wouldn’t want this?

Hyperautomation offers huge potential for several important business benefits, including increased ROI, as well as:

Connecting distributed systems: Hyperautomation enables distributed enterprise software and data storage systems to communicate seamlessly with each other, providing deeper analytics into process bottlenecks and driving efficiencies.

Adding decision-making to automation: Using AI, automated workflows can make live, data-driven decisions just as a human operator would.

Enabling your workforce: Minimizing time-consuming, repetitive tasks and capturing the knowledge employees use to perform them frees staff to focus on business-critical activities.

Digital agility: Hyperautomation prompts all technology components to function in tandem with business needs and requirements, helping organizations achieve a state of true digital agility and flexibility.

Collaboration: Process owners can intuitively automate cross-functional activities that involve multiple stakeholders, reducing cycle time and boosting productivity.

 

Transforming your processes

Given the breadth and scope of hyperautomation, it holds the promise of transforming enterprise processes. Despite prevailing thought, it’s not simply an extension of robotic process automation (RPA), though RPA certainly is a step in the right direction. A combination of technologies that come under the umbrella of hyperautomation can be used together to turbocharge enterprise productivity – for example, bringing together document ingestion and AI technologies (like OCR, Computer Vision, NLP, Fuzzy Logic and Machine Learning) in conjunction with business process management can deliver game-changing innovation in enterprise document processing workflows.
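As a rough illustration of how these building blocks combine, here is a minimal sketch assuming pytesseract and Pillow are installed and the Tesseract OCR engine is available; the file name, field patterns, and routing threshold are hypothetical stand-ins for what a hyperautomation platform would let you configure.

```python
# A minimal "ingest -> extract -> decide" document workflow sketch.
# Assumes pytesseract/Pillow are installed and Tesseract OCR is on PATH;
# the file name, regex patterns, and routing threshold are hypothetical.
import re
from PIL import Image
import pytesseract

def ingest_invoice(path: str) -> dict:
    """OCR a scanned invoice and pull out a couple of fields with regexes."""
    text = pytesseract.image_to_string(Image.open(path))
    total = re.search(r"Total[:\s]*\$?([\d,]+\.\d{2})", text)
    invoice_no = re.search(r"Invoice\s*#?\s*(\w+)", text)
    return {
        "invoice_no": invoice_no.group(1) if invoice_no else None,
        "total": float(total.group(1).replace(",", "")) if total else None,
    }

def route(invoice: dict) -> str:
    """Toy decision step: large or incomplete invoices go to a human reviewer."""
    if invoice["total"] is None or invoice["invoice_no"] is None:
        return "manual_review"
    return "manual_review" if invoice["total"] > 10_000 else "auto_approve"

if __name__ == "__main__":
    doc = ingest_invoice("scanned_invoice.png")  # hypothetical input file
    print(doc, "->", route(doc))
```

The point is the shape of the workflow (ingest, extract, decide) rather than any particular library.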

It is important to see hyperautomation as a step-by-step enabler and not necessarily a “big bang” project that instantly solves all problems. It takes time, but it is possible to overcome technical debt with the suite of capabilities that hyperautomation offers, bringing with it huge potential for transformation. That’s what makes it a top strategic technology trend.

Source Prolead brokers usa

the most common problems and solutions with your crm data
The Most Common Problems and Solutions with Your CRM Data

the most common problems and solutions with your crm data


Your Customer Relationship Management (CRM) software is the primary source you leverage for effective client communication. It is also the key to personalized sales targeting and to planning future marketing campaigns. So it should come as no surprise that meticulously maintaining the data entering your CRM is a top business priority.

But that is not always the case, is it? Your CRM is, after all, a dynamic entity overwhelmed by the magnitude of customer data pouring in every second. Can you sort through the volume and mine the data most beneficial to you? Sure you can, so long as you know where to look!

In this post, we bring you the five most common problem areas in your CRM dataset, along with their solutions.

1. Incomplete Data Entries

Perhaps the most common problem with a CRM dataset is shoddily completed data entries. You open your dashboard only to find certain pieces of information missing. Incomplete entries can include missing email addresses, incorrect names with missing titles, unlisted phone numbers, and many other gaps. Such an erroneous dataset makes it impossible to deploy an effective marketing campaign, no matter how great your product.

Solution:

Train your sales and marketing teams and customer service professionals to ask buyers for complete contact information. If needed, prepare a policy blueprint for these teams on how to collect data from customers. Also, instruct them to enter only those customers who have provided complete contact information.

Furthermore, integrate purchasing and invoicing data with your CRM so the information flow is complete. When you have data coming from all in-house sources, it can help complete every buyer’s profile. Check your CRM’s configuration to verify if it is capturing all the data from multiple sources.
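For a sense of what checking completeness can look like in practice, here is a minimal sketch assuming your CRM can export contacts to CSV; the file and column names are hypothetical.

```python
# Minimal completeness audit for an exported CRM contact list.
# Assumes a CSV export with these (hypothetical) columns.
import pandas as pd

REQUIRED = ["first_name", "last_name", "email", "phone", "company"]

contacts = pd.read_csv("crm_contacts.csv")  # hypothetical export file

# Treat empty strings as missing, then count gaps per record.
contacts = contacts.replace(r"^\s*$", pd.NA, regex=True)
contacts["missing_fields"] = contacts[REQUIRED].isna().sum(axis=1)
incomplete = contacts[contacts["missing_fields"] > 0]

print(f"{len(incomplete)} of {len(contacts)} records are incomplete")
print(incomplete[["email", "missing_fields"]].head())
```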

2. Eroded or Decayed Records

If you want your marketing efforts to die a slow death, send out emails or newsletters using decayed CRM data. Clients who have abandoned their old email addresses or phone numbers fall within this category. If your marketing team is experiencing too many hard bounces, know that your CRM data contains a significant amount of stale information.

Solution:

One way to counter this issue is to confirm an existing customer's contact information whenever you come into direct contact with them. For instance, if your customer service team receives an inbound buyer call, quickly reconfirm the caller's contact details.

You can also hire third parties to scrub and cleanse your outdated CRM data. Companies that offer data cleansing services cross-reference your databases with theirs and weed out the decayed information. They also append entries and add new contacts if you need them. Choosing this alternative can quickly get your CRM data in top form. Shown below is the process they employ for thorough data cleansing.

Image: the data cleansing process employed by such vendors (Source: Span Global Services)
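If you prefer a quick in-house pass before engaging a vendor, a short script can flag likely decayed records for re-verification. This is only a sketch; the column names and the 18-month inactivity threshold are assumptions.

```python
# Flag likely decayed CRM records for re-verification.
# Column names and the 18-month threshold are illustrative assumptions.
import pandas as pd

contacts = pd.read_csv("crm_contacts.csv", parse_dates=["last_activity"])

cutoff = pd.Timestamp.today() - pd.DateOffset(months=18)
stale = (contacts["last_activity"] < cutoff) | contacts["hard_bounced"].fillna(False).astype(bool)

contacts["needs_reverification"] = stale
contacts[contacts["needs_reverification"]].to_csv("reverify_queue.csv", index=False)
print(f"{stale.sum()} contacts queued for re-verification")
```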

3. Falling Short on Adequate Leads

Are you marketing to the same buyer base repeatedly? This happens when your CRM does not have enough new contacts to market to. Ensuring that enough leads pour into your CRM software is a critical challenge countless organizations grapple with.

Solution:

While good CRM software helps you stay connected and nurture current customers, it cannot assist you with scouting fresh prospects. For this, you need to go “all hands on deck” with your sales and marketing teams. Get them to ramp up their efforts to get new prospects into your sales pipeline.

4. Adhering to Data Compliance Norms

Privacy and security concerns around customer data are rapidly evolving, and so should your practices! Believe it or not, buyer data that enters an organization's CRM application is often unethically sourced. Reaching out to such contacts can have serious repercussions, up to and including penal action. If you think data compliance is not one of your worries, know that a study by Dun & Bradstreet found that companies listed “protecting data privacy” as their top customer data issue.

Solution:

Start adhering to GDPR norms and educate your employees about them by conducting an extensive workshop, if needed. The EU General Data Protection Regulation (GDPR) is a ready policy that you can apply to your customer data practices. So why reinvent the wheel when you already have one in the form of the GDPR norms?

5. Underutilizing the CRM Software

CRM software comes with a wealth of features that can boost your data utilization, yet countless organizations fail to exploit its full potential. Studies conclude that a whopping 43% of CRM users leverage less than half the features their CRM software offers. How come? Is the learning curve too steep? Were users not given enough time or training to adapt to a new CRM application? It can be a bit of both, but in most cases it is unawareness of the software's potential that leaves it underutilized.

Solution:

Get your CRM supplier to deliver a training module or workshop on a rolling basis, and put them under contract to train employees every time a fresh batch joins. Also provide your CRM users with the correct contact details for support in case they want to use an unfamiliar feature but cannot work out how.

The Bottom Line

You're now well-equipped to approach your CRM data, and the software that houses it, with a fresh pair of eyes. Armed with knowledge of the issues that surround CRM data, you can forge more meaningful relationships with your prospects as well as your customers.

Source Prolead brokers usa

Face Detection Explained: State-of-the-Art Methods and Best Tools


So many of us have used Facebook applications to see ourselves aging, turned into rock stars, or wearing festive make-up. Such waves of facial transformation are usually accompanied by warnings not to share images of your face – otherwise, they will be processed and misused.

But how does AI use faces in reality? Let’s discuss state-of-the-art applications for face detection and recognition.


First, detection and recognition are different tasks. Face detection is the part of face recognition that determines the number and location of faces in a picture or video, without remembering or storing any details about them. It may infer some demographic attributes such as age or gender, but it cannot recognize individuals.

Face recognition identifies a face in a photo or video image against a pre-existing database of faces. Faces first need to be enrolled into the system to create a database of unique facial features. Afterward, the system breaks down a new image into key features and compares them against the information stored in the database.

First, the computer examines either a photo or a video frame and tries to distinguish faces from other objects in the background. There are several methods a computer can use to achieve this, compensating for illumination, orientation, or camera distance. Yang, Kriegman, and Ahuja presented a classification of face detection methods. These methods fall into four categories, and a given face detection algorithm may belong to two or more of them.


Knowledge-based face detection

This method relies on a set of rules developed by humans from our knowledge of faces: a face must have a nose, eyes, and a mouth within certain distances of and positions relative to each other. The difficulty lies in building an appropriate set of rules: if the rules are too general, the system produces many false positives; if they are too detailed, it misses real faces. The method also does not work equally well for all skin tones and depends on lighting conditions, which can change the apparent hue of a person's skin in the picture.


Template matching

The template matching method uses predefined or parameterized (deformable) face templates to locate or detect faces via the correlation between the templates and the input image. The face model can be constructed from edges using an edge detection method.


A variation of this approach is the controlled background technique. If you are lucky enough to have a frontal face image on a plain background, you can remove the background, leaving only the face boundaries.

For this approach, the software uses several classifiers for detecting various types of frontal faces, some for profile faces, and detectors for the eyes, nose, mouth, and, in some cases, even a whole body. While the approach is easy to implement, on its own it is usually inadequate for face detection.

Feature-based face detection

The feature-based method extracts structural features of the face, trains a classifier on them, and then uses that classifier to differentiate facial from non-facial regions. One example is color-based face detection, which scans colored images or videos for areas with typical skin color and then looks for face segments within them.

Haar feature selection relies on properties shared by human faces, such as the relative location and size of the eyes, mouth, and bridge of the nose, together with the oriented gradients of pixel intensities. In the classic cascade detector, 38 layers of cascaded classifiers evaluate a total of 6,061 features per frontal face. You can find some pre-trained classifiers here.
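OpenCV ships with pre-trained Haar cascades, so a minimal detector can be sketched as below; the input file name is a placeholder.

```python
# Minimal Haar cascade face detection with OpenCV's bundled classifier.
import cv2

# Load the pre-trained frontal-face cascade that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("group_photo.jpg")            # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # cascades work on grayscale

# scaleFactor and minNeighbors trade off recall against false positives.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_detected.jpg", img)
print(f"Detected {len(faces)} face(s)")
```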


Histogram of Oriented Gradients (HOG) is a feature extractor for object detection. The features extracted are the distribution (histograms) of directions of gradients (oriented gradients) of the image. 

Gradients are typically large around edges and corners, which allows those regions to be detected. Instead of considering raw pixel intensities, the method counts the occurrences of gradient orientations in localized image segments. It uses overlapping local contrast normalization to improve accuracy.
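One widely used HOG-based face detector is dlib's default frontal face detector, which pairs HOG features with a linear SVM. A minimal sketch, assuming dlib and OpenCV are installed and the image path is a placeholder:

```python
# Minimal HOG-based face detection using dlib's frontal face detector,
# which combines HOG features with a linear SVM classifier.
import dlib
import cv2

detector = dlib.get_frontal_face_detector()

img = cv2.imread("portrait.jpg")               # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# The second argument upsamples the image once to find smaller faces.
rects = detector(gray, 1)

for r in rects:
    print("face at", r.left(), r.top(), r.right(), r.bottom())
```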


Appearance-based face detection

The more advanced appearance-based method depends on a set of representative training face images to learn face models. It relies on machine learning and statistical analysis to find the relevant characteristics of face images and extract features from them. This method unites several algorithms:

The eigenface-based algorithm efficiently represents faces using Principal Component Analysis (PCA). PCA is applied to a set of images to lower the dimensionality of the dataset while best describing its variance. A face can then be modeled as a linear combination of eigenfaces (a set of eigenvectors), and recognition is based on comparing the coefficients of this linear representation (a minimal sketch follows this list).


Distribution-based algorithms such as PCA and Fisher's Linear Discriminant define a subspace representing facial patterns. A trained classifier then identifies instances of the target pattern class among background image patterns.

Hidden Markov Model is a standard method for detection tasks. Its states would be the facial features, usually described as strips of pixels. 

Sparse Network of Winnows defines two linear units or target nodes: one for face patterns and the other for non-face patterns.

Naive Bayes classifiers compute the probability that a face appears in the picture based on the frequency with which a series of patterns occurs over the training images.

Inductive learning uses algorithms such as Quinlan's C4.5 or Mitchell's FIND-S to detect faces, starting with the most specific hypothesis and generalizing from it.

Neural networks, such as GANs, are among the most recent and most powerful methods for detection problems, including face detection, emotion detection, and face recognition.
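To make the eigenface idea above concrete, here is a minimal sketch using scikit-learn and its LFW dataset loader: PCA learns the eigenfaces, and each face is then represented by its projection coefficients, which a recognizer would compare.

```python
# Eigenfaces sketch: PCA on face images, then represent each face by its
# coefficients in the eigenface basis (used for comparison/recognition).
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA

# Downloads the Labeled Faces in the Wild dataset on first run.
faces = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
X = faces.data                       # each row is a flattened face image

pca = PCA(n_components=150, whiten=True, random_state=0).fit(X)
eigenfaces = pca.components_.reshape((150, *faces.images.shape[1:]))

# Project faces onto the eigenface basis; these coefficient vectors are
# what a recognizer would compare (e.g., with a nearest-neighbor rule).
coefficients = pca.transform(X)
print("eigenfaces:", eigenfaces.shape, "coefficients:", coefficients.shape)
```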

Video Processing: Motion-based face detection

In video, you can use movement as a guide. One specific facial movement is blinking, so if the software can detect a regular blinking pattern, it can infer that a face is present.

Various other motions indicate that the image may contain a face, such as flared nostrils, raised eyebrows, wrinkled foreheads, and opened mouths. When a face is detected and a particular face model matches a specific movement, the model is laid over the face, enabling the tracker to pick up further facial movements. State-of-the-art solutions usually combine several methods, for example extracting features that are then fed into machine learning or deep learning algorithms.
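A rough sketch of the motion-as-a-guide idea, not a production tracker: difference consecutive frames and only run a face detector when enough pixels have changed. The video file name and thresholds are arbitrary assumptions.

```python
# Motion-gated face detection sketch: only run the (Haar) detector on
# frames that show enough movement compared with the previous frame.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("webcam_clip.mp4")      # placeholder video file
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev_gray)
    prev_gray = gray

    # Arbitrary threshold: skip detection on (nearly) static frames.
    moving_pixels = cv2.countNonZero(
        cv2.threshold(motion, 25, 255, cv2.THRESH_BINARY)[1])
    if moving_pixels > 500:
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        print(f"motion frame: {len(faces)} face(s)")

cap.release()
```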

Face detection tools

There are dozens of face detection solutions, both proprietary and open-source, that offer various features, from simple face detection to emotion detection and face recognition.

Proprietary face detection software

Amazon Rekognition is based on deep learning and is fully integrated into the Amazon Web Services ecosystem. It is a robust solution for both face detection and recognition, and it can also detect eight basic emotions such as “happy”, “sad”, and “angry”. It can detect up to 100 faces in a single image, there is an option for video, and pricing differs by type of usage.

Face++ is a face analysis cloud service that also has an offline SDK for iOS and Android. You can perform an unlimited number of requests, but only three per second. It supports Python, PHP, Java, JavaScript, C++, Ruby, iOS, and Matlab, providing services such as gender and emotion recognition, age estimation, and landmark detection.

Face++ primarily operates in China, is exceptionally well funded, and is known for its inclusion in Lenovo products. However, bear in mind that its parent company, Megvii, was sanctioned by the US government in late 2019.

Face Recognition and Face Detection API (Lambda Labs) provides face recognition, facial detection, eye position, nose position, mouth position, and gender classification. It offers 1000 free requests per month.

Kairos offers a variety of image recognition solutions. Its API endpoints cover gender and age identification, facial recognition, and emotional depth in photos and video. It offers a 14-day free trial with a limit of 10,000 requests and provides SDKs for PHP, JS, .NET, and Python.

Microsoft Azure Cognitive Services Face API allows you to make 30,000 requests per month (20 requests per minute) on the free tier. For paid use, the price depends on the number of recognitions per month, starting from $1 per 1,000 recognitions. Features include age estimation, gender and emotion recognition, and landmark detection. SDKs support Go, Python, Java, .NET, and Node.js.

Paravision is a face recognition company for enterprises providing self-hosted solutions. Face and activity recognition and COVID-19 solutions (face recognition with masks, integration with thermal detection, etc.) are among their services. The company has SDKs for C++ and Python.

Trueface also serves enterprises, providing features such as gender recognition, age estimation, and landmark detection as a self-hosted solution.

Open-source face detection solutions

Ageitgey/face_recognition is a GitHub repository with 40k stars and one of the most extensive face recognition libraries. The contributors also claim it to be the “simplest facial recognition API for Python and the command line.” Its drawbacks are that the latest release dates back to 2018 and that its model's recognition accuracy of 99.38% could be better by 2021 standards. It also has no REST API. (A minimal usage sketch follows this list of tools.)

Deepface is a Python framework with 1.5k stars on GitHub, providing facial attribute analysis such as age, gender, race, and emotion. It also provides a REST API.

FaceNet, developed by Google, is implemented as a Python library. The repository boasts 11.8k stars, but the last significant updates were in 2018. Its recognition accuracy is 99.65%, and it has no REST API.

InsightFace is another Python library, with 9.2k stars on GitHub, and the repository is actively updated. Its recognition accuracy is 99.86%, and it claims to provide a variety of algorithms for face detection, recognition, and alignment.

InsightFace-REST is an actively updated repository that “aims to provide convenient, easy deployable and scalable REST API for InsightFace face detection and recognition pipeline using FastAPI for serving and NVIDIA TensorRT for optimized inference.”

OpenCV isn't an API, but it is a valuable library with over 3,000 optimized computer vision algorithms. It offers many options for developers, including the EigenFaceRecognizer and LBPHFaceRecognizer face recognition modules.

OpenFace is a Python and Torch implementation of face recognition with deep neural networks. It is based on the CVPR 2015 paper “FaceNet: A Unified Embedding for Face Recognition and Clustering.”
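To give a taste of the simplest of these libraries, here is a minimal sketch with ageitgey/face_recognition (image file names are placeholders): it locates faces in one image and compares them against a known person's encoding.

```python
# Minimal detection + comparison with the face_recognition library
# (built on dlib). Image file names are placeholders.
import face_recognition

image = face_recognition.load_image_file("group_photo.jpg")
locations = face_recognition.face_locations(image)   # (top, right, bottom, left) boxes
print(f"Found {len(locations)} face(s)")

# Assumes known_person.jpg contains at least one clearly visible face.
known = face_recognition.face_encodings(
    face_recognition.load_image_file("known_person.jpg"))[0]
candidates = face_recognition.face_encodings(image)

matches = face_recognition.compare_faces(candidates, known, tolerance=0.6)
print("Matches:", matches)
```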

Bottom line

Face detection is the first step in further face analysis, including recognition, emotion detection, and face generation, and it is crucial because it gathers the data every subsequent step depends on. Robust face detection is a prerequisite for sophisticated recognition, tracking, and analytics tools, and a cornerstone of computer vision.

Originally posted on SciForce blog.

Source Prolead brokers usa

NodeJS vs. JAVA: Which Backend Language Wins The Battle?


In the history of computing, 1995 was a revolutionary year: Java made headlines, and JavaScript appeared close on its heels in the same year. The two sound like twins name-wise; however, few technologies are as distinct as Java and JavaScript. Despite their technical differences, they have ended up colliding as backend tech stacks, thanks to NodeJS.

Today, enterprises of all scales and sizes prefer to hire NodeJS developers for their projects. Don't believe it? Consider this: Netflix recently migrated from Java to NodeJS.

But Why? 

Because NodeJS is not only fast but also offers a flexible development process and scalability benefits. That does not make Java any less of a contender; there are plenty of areas where Java still rules the backend.

All we need to do is look at their individual benefits and then decide which backend technology serves different kinds of projects better.

But before we make a head-to-head comparison of NodeJS and Java, let's take an overview of both technologies to cover the basics.

JAVA: An Overview

Java is a traditional object-oriented programming language that followed C++. It embraces the principle of “write once, run anywhere”: code, once written, can run on any platform without recompilation.

The language is highly secure and stable, making it a good fit for eCommerce, transportation, FinTech, and banking services. There are four major Java platforms:

  1. JavaFX
  2. Java Standard Edition (Java SE)
  3. Java Micro Edition (Java ME)
  4. Java Enterprise Edition (Java EE)

Currently, Java powers almost 3.4% of websites and is consistently ranked among the top five programming languages.

NodeJS: An Overview

NodeJS is an open-source, server-side JavaScript runtime environment. It runs on a single-threaded event loop that delivers scalable, high-performing results. You can easily extend a project's capabilities using NodeJS-based frameworks such as Socket.io, Meteor.js, and Express.

NodeJS is built on an event-driven, push-based architecture, which makes it well suited to single-page applications (SPAs), API services, and complex websites.

Currently, more than 43% of enterprises hire NodeJS developers to develop business-grade applications.  

NodeJS was also rated one of the most loved tools in the 2020 developer survey.

Now that we are familiar with both backend technologies, let's get into comparison mode. For a balanced comparison, we will use three significant criteria:

  1. Performance
  2. Architectural structure
  3. Testing

Performance comparison of Java and NodeJS:

Performance is an unavoidable factor in any website or application development. 

Java- 

  • Programs written in Java are compiled to bytecode, which helps the language perform better than many traditional interpreted technologies.
  • These bytecode instructions are executed efficiently by the Java Virtual Machine, resulting in smooth performance.
  • To deliver high-performing applications, Java also includes components such as a just-in-time (JIT) compiler.

NodeJS- 

  • NodeJS is massively popular for its non-blocking, asynchronous structure, which creates an ideal environment for running many small operations concurrently.
  • Blocking background work such as I/O is handled off the main thread, so it does not impact the main application thread.
  • NodeJS uses the V8 JavaScript engine, which compiles JavaScript to machine code and delivers fast results.

Architectural Structure:

When it comes to choosing a framework, every product owner tries to avoid strict guidelines or architectural protocols. Let’s see which one is more flexible in terms of architecture. 

Java-

  • With Java, developers tend to follow the MVC (Model View Controller) architectural pattern.
  • This facilitates hassle-free testing and easy maintenance of code. 
  • Since modules are designed and developed separately, a change in one module doesn't impact the entire application.
  • Hence, you save yourself from a lot of rework and documentation processes with Java. 

NodeJS- 

  • NodeJS is modern and advanced and leverages a single-threaded event loop architectural system.
  • This system handles many concurrent requests without degrading performance.
  • NodeJS also allows you to follow the MVC architecture pattern, which eases onboarding onto the codebase.
  • The best part is that the technology encourages asynchronous communication between the components. 

Testing:

Working without defects or glitches is a blessing for any development project, and customers today expect reliable, fast-loading websites and applications. Let's see which technology handles the testing phase more efficiently.

Java- 

  • Developers can easily create test cases with Java. 
  • Test cases can include grouping, sequencing, and data-driven features, and you can also run tests in parallel.
  • The language is compatible with multiple testing tools and frameworks such as FitNesse, Apache JMeter, Selenium, and JUnit.

NodeJS-

  • NodeJS has rich debugging and testing capabilities.
  • It creates a sound ecosystem for applications with tools like Mocha, Jasmine, AVA, Lab, etc. 
  • A key strength is its compatibility with testing libraries such as Chai and Mocha, which helps teams deliver a seamless user experience.

Several other factors, such as scalability, database support, and microservice architecture, can help weigh the benefits of both backend technologies.

But the primary dilemma is when and where to use Java and NodeJS in your projects. 

You can consider NodeJS under these conditions:

  1. Your application supports web streaming content. 
  2. You require a performant single-page application.
  3. Your app should contain efficient data processing capabilities. 
  4. You expect a real-time multi-user web app. 
  5. Your project is a browser-based gaming app. 

You can consider Java under these conditions:

  1. You want to develop an enterprise-grade application. 
  2. You are looking for rich community support for accessible services. 
  3. Your business is Big Data or eCommerce. 
  4. You need mature technology with security features. 
  5. You wish to develop a cryptocurrency application with security functions. 

Conclusion

Since each technology offers its own features and benefits, there is no single master to rule them all; everything depends on your project requirements. We hope the factors above helped you resolve your backend dilemma. If you wish to explore further, you can hire full-stack developers who are experts in both NodeJS and Java to make your development process even smoother.

Source Prolead brokers usa
