Become a Certified Data Scientist with These Data Science Certifications

Worldwide, data science has become vital across industries: companies use it to extract valuable insights and stay ahead of the competition. Most industries sit on massive amounts of data they don't know what to do with, so the need for data science professionals who can make sense of that data has grown immensely.

People who choose a career path in data science can prove their skills on big data platforms by completing certification programs through the many institutions that offer data science certifications, both online and offline.

There are many paths to a data science certification. In this article, let's take a closer look at the certifications that are in highest demand for becoming an expert in data science.

SAS offers multiple data science certifications that focus mainly on SAS products. One of them, SAS Certified Big Data Professional Using SAS 9, gives registrants insight into Big Data using a variety of open-source tools and SAS Data Management tools. Candidates use intricate ML models to create business recommendations and deploy those models at scale in a robust, flexible SAS environment. To attain the SAS Certified Big Data Professional Using SAS 9 credential, an applicant must pass all five exams, which consist of a mix of multiple-choice, short-answer, and interactive questions. The five exams are:

· SAS Certified Big Data Professional:

  1. SAS Big Data Programming and Loading
  2. SAS Big Data Preparation, Statistics and Visual Exploration

· SAS Certified Advanced Analytics Professional:

  1. SAS Advanced Predictive Modeling
  2. Predictive Modeling Using SAS Enterprise Miner 7, 13, or 14
  3. SAS Text Analytics, Time Series, Experimentation and Optimization

DASCA is an industry-recognized certification body that provides certifications for senior data scientists. These certifications give professionals the acumen and capability to anticipate and appreciate the need to deploy the latest data science techniques, tools, and concepts to manage and harness Big Data across various verticals, environments, and markets. DASCA tests every candidate's ability against one of the most robust generic data science knowledge frameworks available. The certification programs cover a complete range of essential knowledge areas, and its approaches, initiatives, and programs work toward developing every professional's knowledge to address the challenging objectives of Big Data stakeholders globally.

Data science professionals across 183 countries can take DASCA certification exams and study from advanced Big Data learning resources. The certifications are based on the comprehensive Data Science Body of Knowledge (DASCA-DSBoK™), designed around the seminal Data Science Essential Knowledge Framework (DASCA-EKF™).

Being a certified data science professional helps you reach new horizons of information; the credentials are specially designed for big data engineers, big data analysts, and data scientists. The certifications are as follows:

· Data Scientist Certifications

DASCA Data Scientist Certifications address the credentialing needs of senior, accomplished professionals who specialize in managing and leading big data strategies and programs, and who have proven competence in leveraging big data technologies to generate mission-critical information for firms and businesses.

  • Senior Data Scientist (SDS™)

The SDS™ credential is solid proof that an individual has taken a major step toward mastering the field of data science. The skills and knowledge gained through this certification will set them ahead of the competition. The credential program has five tracks that appeal to different applicants; each track has different prerequisites in terms of degree level, work experience, and application requirements.

  • Principal Data Scientist (PDS™)

This credential has three tracks for professionals with 10 or more years of experience in big data. The exam covers basic to advanced data science concepts, including big data best practices, business strategies for data, developing cross-company support, ML, NLP, stochastic modeling, and more.

The SDS™ and PDS™ credential exams are 100-minute online exams, and DASCA offers a complete exam preparation kit.

Dell EMC Education Services provides the Data Science and Big Data Analytics Certification to evaluate a person's in-depth knowledge of data analytics. The exam emphasizes analyzing and exploring data with R, the data analytics lifecycle, creating statistical models, choosing appropriate data visualization tools, and applying analytic techniques and data science methods such as Natural Language Processing (NLP), random forests, and logistic regression. There are no specific prerequisites to enroll in this certification program.

Completing a certification program is very useful: it improves your skills and makes you a more valuable asset to the company you work for. Certifications are a sound investment for anyone who wants to grow in their career.

Instant Grocery Delivery Is Following a Data-Driven Path to Survive (Part 1)

Instant grocery delivery is the startup hype of the year in Europe. You select a few groceries via the shopping app, pay via PayPal, and 10 minutes later a bike courier is at your door with your purchases. It's a business model that feels like magic to users. A few months after launch, I know friends who already do almost half of their shopping this way. It's a multi-billion dollar idea like Uber: a business model that is easy to explain and still magical. But there are also apparent problems with highly disruptive business models like this:

  • Overworked bike couriers going on strike.
  • Issues with the districts because of noise pollution from warehouses located in the middle of residential areas.
  • A low margin on products and little price tolerance from customers.
  • Business growth that has to happen geographically, district by district and city by city, for companies like Gorillas.
  • The colossal competition (I count 12 providers in Germany alone by now).

The US company GoPuff, founded in 2013, is considered a pioneer for startups such as Gorillas, Flink, Zap, and Getir. GoPuff makes data-driven decisions to minimize the risks mentioned above. To boost these ambitions, GoPuff recently acquired the data science startup RideOS for $115 million. In markets with aggressive pricing, many direct competitors, and existing substitutes, quickly building a competitive advantage through technology has proven to make the business model more efficient. A bold but also expensive move by GoPuff. In this article, I will show how to integrate geospatial analytics for an instant grocery delivery use case within a day, without spending multi-millions on a startup acquisition.

But how exactly can we think about data-driven decision-making for instant grocery delivery? The key questions to optimize for are:

  • Where should I set up warehouses?
  • What is the optimal size of the driver fleet?
  • What are the preferences of target customers in the region?
  • How big is the market potential overall?

In this article, we ask ourselves the fictitious question: should an instant grocery delivery company expand into the outlying Berlin district of Pankow? We do this using external data sources that can scale globally, together with the open-source data integration framework Kuwala. With Kuwala, we can easily extract scalable and granular behavioral data for entire cities and countries. Below you see activity patterns at grocery shops in Hamburg. We will use some of these functionalities to derive insights for the areas described.

[Embedded video: activity patterns at grocery shops in Hamburg]

We start our analysis by comparing data on a neighborhood of Pankow with the neighboring part of Prenzlauer Berg ("PBerg"). The two selected areas are similar in size (square kilometers). Using the Kuwala framework, we first integrate high-resolution demographics data. At a top-level view, the areas are comparable in total population and within subgroups of gender and age.

In the next step, we analyze the status quo of grocery-related Points of Interest (POIs), such as supermarkets. We build the data pipeline on OpenStreetMap data and extract each POI's category, name, and price level. We combine that data with hourly popularity and visitation frequency at those POIs.
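
To make the POI extraction step tangible, here is a minimal sketch that queries public OpenStreetMap data through the Overpass API instead of Kuwala's own pipeline; the bounding box, tags, and the idea of using brand as a rough price-level proxy are illustrative assumptions, not part of the original analysis.

    import requests

    # Hypothetical bounding box roughly covering the Pankow area (south, west, north, east).
    BBOX = "52.55,13.38,52.62,13.46"

    # Overpass QL: fetch supermarket nodes inside the bounding box.
    query = f"""
    [out:json][timeout:60];
    node["shop"="supermarket"]({BBOX});
    out body;
    """

    response = requests.post("https://overpass-api.de/api/interpreter", data={"data": query})
    response.raise_for_status()

    pois = []
    for element in response.json()["elements"]:
        tags = element.get("tags", {})
        pois.append({
            "name": tags.get("name"),
            "brand": tags.get("brand"),  # brand can act as a rough proxy for price level
            "lat": element["lat"],
            "lon": element["lon"],
        })

    print(f"Found {len(pois)} supermarkets in the bounding box")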

We find that Pankow has significantly fewer supermarkets per square kilometer. The data also shows that the price level of grocery stores is much higher in PBerg. Furthermore, we see that grocery stores in Pankow are visited about 10% more during the evening than those in PBerg. In summary, we can now assume that people in Pankow…

  • … travel longer to supermarkets on average.
  • … often spend more time in the evening hours in supermarkets.
  • … have a lower price elasticity towards groceries.

Companies can use this information in a market entry strategy. An aggressive cashback activation could convince people in Pankow to skip the evening trip to the supermarket in favor of the comfort of receiving their purchases right at the door.

We aggregated the high-resolution demographics data to an H3 resolution of 11 (based on raw data representing 30×30 meter areas). This lets us analyze the distribution of people within a comparatively small district in depth.

  • We can spot areas with a high population from the young target demographic and fewer reachable options for doing groceries.
  • In addition, we can spot micro-neighborhoods with a low population density, which makes those areas a perfect spot to open a warehouse: close enough to service areas and far enough away from people who could be disturbed by noise.
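
For readers who want to reproduce the aggregation step described above, here is a minimal sketch that buckets point-level population estimates into H3 cells at resolution 11. It assumes the h3-py bindings (v3 API) and a made-up list of records; it is not Kuwala's actual pipeline.

    from collections import defaultdict

    import h3  # h3-py, v3 API

    # Hypothetical point-level population estimates (center of a 30x30 m raster cell).
    records = [
        {"lat": 52.5689, "lng": 13.4021, "population": 14},
        {"lat": 52.5692, "lng": 13.4035, "population": 9},
        {"lat": 52.5701, "lng": 13.4010, "population": 21},
    ]

    RESOLUTION = 11

    # Sum population per H3 cell at the chosen resolution.
    population_per_cell = defaultdict(float)
    for record in records:
        cell = h3.geo_to_h3(record["lat"], record["lng"], RESOLUTION)
        population_per_cell[cell] += record["population"]

    for cell, population in population_per_cell.items():
        print(cell, round(population, 1))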

In the next part of this article, I will share some more advanced algorithms to identify over- and under-served areas and put everything at scale by comparing entire cities and the popularity of those places. If you want to discuss geospatial topics with us in the meantime, I recommend joining our Slack community.

10 Ways to Scale Customer Engagement with Facebook Chatbots in 2021

Introduction

Automated messages like “Hi, how may I help you?” are familiar to anyone who looks for service online. What are they? They are business chatbots, which have improved customer service by making it available around the clock; 64% of online users say they are satisfied with such automated systems.

In fact, chatbots are expected to handle 85% of all customer interactions by early 2022, and almost 50% of businesses prefer chatbots to mobile apps, which points to a viable future for automated customer service. Let's see what those ways are.

1. Boost Customer Service

The automated messaging feature of Facebook chatbots can provide an immediate response to customers. A well-developed Facebook Messenger chatbot with a built-in cache of FAQs gives quick answers to customer queries. Moreover, chatbots can offer multiple-choice responses to understand a customer's specific needs.
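
As a rough, hypothetical illustration of how such an FAQ bot can work under the hood, here is a minimal webhook sketch using Flask and the Messenger Send API. The tokens and FAQ entries are placeholders, and a production bot would add real intent matching, signature verification, and error handling.

    import requests
    from flask import Flask, request

    app = Flask(__name__)

    PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # placeholder
    VERIFY_TOKEN = "YOUR_VERIFY_TOKEN"            # placeholder

    # A tiny FAQ "cache": keyword -> canned answer.
    FAQ = {
        "hours": "We are open from 9 am to 9 pm, Monday to Saturday.",
        "shipping": "Standard shipping takes 2-4 business days.",
        "returns": "You can return any item within 30 days.",
    }

    def answer_for(text):
        text = text.lower()
        for keyword, answer in FAQ.items():
            if keyword in text:
                return answer
        return "Thanks for reaching out! An agent will get back to you shortly."

    def send_message(recipient_id, text):
        # Messenger Send API call.
        requests.post(
            "https://graph.facebook.com/v12.0/me/messages",
            params={"access_token": PAGE_ACCESS_TOKEN},
            json={"recipient": {"id": recipient_id}, "message": {"text": text}},
        )

    @app.route("/webhook", methods=["GET"])
    def verify():
        # Webhook verification handshake required by Facebook.
        if request.args.get("hub.verify_token") == VERIFY_TOKEN:
            return request.args.get("hub.challenge", "")
        return "Verification token mismatch", 403

    @app.route("/webhook", methods=["POST"])
    def receive():
        payload = request.get_json()
        for entry in payload.get("entry", []):
            for event in entry.get("messaging", []):
                message = event.get("message", {})
                if "text" in message:
                    send_message(event["sender"]["id"], answer_for(message["text"]))
        return "ok"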

The quick response from chatbots permits customers to make their purchase decision faster and lessens the probability of shifting to a competitor. 

Statistics show that there are more than 300,000 active bots on Messenger, which means businesses can stay competitive with the help of Facebook chatbots.

Currently, conversational marketing through Facebook achieves roughly 70% higher open rates than email marketing.

A use case is Domino’s chatbots on Facebook. The chatbots allow customers to choose their favorite dish from a plethora of items and place orders. The chatbot links the customer’s Facebook account to their Domino’s account. Customers can track their orders, seek support, and do many more things. These digital innovations have helped Domino’s increase their customer base by allowing them to have a good experience on their platform.

2. Offer Personalized Recommendation

Customers can view your store's online catalogues within the Messenger application. For instance, Shopify offers e-commerce stores the Messenger sales channel, powered by chatbots, which lets buyers browse products through Messenger. Once buyers make a purchase decision, they are automatically redirected to your website. Messenger chatbots on the sales channel also let companies send automated notifications about orders. Such chatbots can be a blessing for small businesses.

Some brands go beyond customers' expectations and use chatbots to make recommendations during the purchase. Rather than searching through many products on their own, customers can ask for suggestions based on the kinds of products they like. Conversational AI plays a significant role here.

Babylon Health, a well-known British online subscription service, uses chatbots to provide consultations based on a patient's medical history and can connect patients with a physician via video call.

3. Collect Feedback Seamlessly

Your Facebook chatbot can effectively conduct a brief survey for customer feedback in a conversational manner, almost like human interactions. Thus, in a few clicks, your company can gather vital information and form an idea of buyers’ response to your brand, products, or services.

Chatbots save your customers' time, since they only need to click rather than type. Prepare a satisfaction scale or a few statements for customers to choose from; a meticulously designed chatbot speeds up the process of collecting feedback.

Take the use case of a typical survey chatbot. The Facebook chatbot asks the buyer if they would like to participate in the survey. Once the buyer gives their consent, the survey starts instantly. The buyers don’t even have to take the pain of typing anything. They can just select from the ‘options’ furnished below the question to progress through the survey. On top of that, GIFs, images, and videos displayed above the questions make the survey fun and less tedious.

WotNot provides you with some wonderful ways to create Facebook chatbots for business. These chatbots are based on conversational AI, and you can deploy them for flawless feedback collection. 

4. Make Scheduling Appointments Easier

You can use Facebook Messenger chatbots to schedule appointments for your customers. Booking a slot for an appointment through a bot lets customers schedule appointments anytime, without the hassle of contacting a customer service representative.

The beauty brand Sephora enables customers to fix appointments using the Facebook Messenger chatbot. By opting for “Book A Service,” the buyer is directed to a trail of questions from the Facebook chatbot. It helps them select the location and services they would like to schedule. Finally, the chatbot generates a scheduling pop-up that lets customers select a particular slot available at the store. Once the time is fixed, Sephora collects the email and name of the user from Facebook to finalize the appointment. 

Sephora saw an 11% increase in in-store booking conversion rates after introducing the scheduling chatbot. Giving customers a separate way to schedule appointments through Messenger also frees in-store employees to converse and connect with customers on the spot.

5. Enhance Brand Awareness

Your brand's Facebook chatbot lets customers learn what your company does, especially when it interacts with people who have only just come into contact with your brand. This is an impactful way to capture customer attention and move people down your sales funnel, because marketing via Facebook Messenger sees 10-80 times better engagement than email and open rates of 70-80% on average.

You can present your brand directly as part of the chatbot conversation by telling people about your business's latest event or an exciting recent project. Done well, this keeps audiences engaged with your brand's message.

An interesting use case is the Upbeat Advertising Agency, whose Facebook chatbot allows users to develop awareness about the agency directly as part of its bot conversation. The agency messenger bot gets Facebook users started by letting them know about a recent event or an exciting project that Upbeat has been a part of. Such tactics are likely to capture the audience’s attention.

6. Influence Customers to Visit Your Product Page

Once you warm your audience up via Facebook Messenger, you can start directing them to your product pages. Because Facebook Messenger bots can be conversational and amiable while communicating with their target audience, the whole interaction feels natural rather than like a sales pitch. If you do not want to direct people to your product pages this way, add a shop button to the menu instead, but a disciplined conversation does help.

Burberry, a luxury brand, has a well-organized bot conversation facility on its Facebook page that provides visitors with the option to browse its products in both the menu and the conversation.

With Facebook messenger providing highly engaging and personal communications, 40 million businesses have taken to this platform to set up amiable interactions with potential customers and increase their sales. Thus, Facebook Chatbots can play a crucial role insofar as effective customer engagement and conversational marketing is concerned.

Facebook Messenger has grown exponentially over the past few years and has become a well-performing mobile platform rather than simply an app. In fact, about 300,000 chatbot developers have joined the Facebook Messenger platform.

7. Enable Shopping Directly Via Facebook Messenger

A “Buy Now” button lets customers enjoy a seamless buying experience within the Messenger app, shortening the buyer journey and increasing conversion rates. The Facebook chatbot fills out the form automatically with the user's data during this quick checkout process.

Beauty Gifter, the Facebook Messenger chatbot for L'Oréal, aims to enhance personalization. The chatbot learns every buyer's needs and preferences and makes customized product recommendations from 11 L'Oréal brands, integrating with L'Oréal's e-commerce system for checkout. Statistics for the Beauty Gifter chatbot show 27 times better engagement than email, a 31% rate of detailed profiling, and that 82% of buyers loved the experience.

According to IBM, businesses using chatbots will save around $8 billion by 2022. Gartner projected that 85% of customer conversations with businesses would occur via chatbots by 2020, and 53% of customers would rather text than call a customer care agent.

8. Notify Customers With Broadcasts to Increase Customer Retention

Facebook Messenger chatbots for business can convey your brand's message in an engaging way that nudges the target audience toward the right decision. Their high click-through and open rates serve your brand's marketing intent well. A bland email template like “Your Cart Is Waiting” might not offer what the subscriber wants, so the subscriber will most likely not open the mail. Crisp, short broadcasts automated by Facebook chatbots, with friendly emojis and stickers, are more persuasive.

Review your organization's internal style of pitching, customer care terminology, advertising strategy, and so on. This will give you a strong sense of your brand voice and the tone to use in your Messenger broadcasts.

A simple use case is a Facebook chatbot forging relationships by sending broadcasts to customers to educate them about your brand. For example, if you own an athletic store, your target customer base must be people who love running a marathon and you aim to sell more and more sneakers.

Use your Facebook chatbot to create a sequence of messages, with each message consisting of an actionable tip to convince them to get started. This implies your sequence gives them insightful information on how to run their first marathon with the expectation that when they need to purchase running shoes, your brand is the first one they would think of buying from. 

9. Add Augmented Reality to the Customer Experience

Since customers have been opening up to chatbot-based communication, companies are going one step ahead to include Augmented Reality (AR) and conversational AI to make the customers’ experience more immersive. Companies like POND’S, Sephora, Ikea, etc., incorporate AR and conversational AI in their chatbots to make them more targeted and precise.

The advantages of using Augmented Reality and Artificial Intelligence are an extraordinary selling point, a more personalized customer experience, and better prospects of earning impressive revenue.

One famous use case is Victoria Beckham, who is among the many fashion designers to make Augmented Reality an integral part of her chatbot, with impressive results. She has one of the best Facebook chatbots we have seen: it lets users try on her sunglasses collection with their camera to see whether the glasses suit them. This is an innovative tactic to boost conversions.

10. Generate Leads

Chatbots in Facebook Messenger can add an edge to your sales approach. By communicating with users, you can learn their preferences, categorize them, and identify your leads. All you need to do is map your bot's scenarios to your sales funnel and create a positive buyer experience.

You can use your Facebook chatbots to find out the challenges your potential buyers face with a product or service by asking a few multiple-choice questions, then offer valuable suggestions and an easy way to get in touch. Engaging consumers through a chatbot this way keeps them involved with your brand.

Bots can also contribute to your business by nurturing leads. You can send regular automated messages personalized with a follow-up or an interesting new piece of content that keeps your customers hooked, building a strong bond with potential buyers and increasing your leads.

Conclusion

While revising your social media policy, do not forget to include Facebook Messenger chatbots for business. Begin with simple FAQs and automated answers to enhance the quality of customer support, and add more options over time, e.g., product recommendations, content distribution, and events. Do not shy away from more engaging conversations with your customers, even if they happen through Facebook chatbots. In this way, you can develop better and longer-lasting relationships with them.

Try WotNot for creating highly advanced no-code chatbots for businesses of any size. WotNot's state-of-the-art analytics dashboard lets your brand understand customer insights more deeply and use this knowledge to strengthen conversational marketing.

5 Ways To Power-Up Your Data Science Use in Small Business

Ever thought about what our world would look like without data science? Many things would be different. Understanding customers would only be possible in person, and experience would be the only basis for taking new risks, without any way of predicting the outcomes.

Thanks to AI and machine learning, that is no longer the case: tracking and analyzing data has never been this easy. With a few clicks, one can generate accurate data and use various filters to separate out the outliers.

Introduction

Small businesses are always in the spotlight, from being the best in their local market to opening branches in big cities and continuing the legacy they are best known for. Without the right set of data, they too suffer, take heavy losses, and can even disappear.

Without the right team and tools, surviving in the highly competitive business world is extremely hard. If you own a small business, you have to be especially deliberate about leveraging data science in your business and scaling it.

Having said that, here are five expert tips to power up data science use in your small business. Let’s dive in. 

Every business, no matter how big or small, has its own strategies, and that is what makes it stand out in the market. With effective branding, advertising, and customer experience, along with the quality of the products they deliver, businesses establish their position in the market. That is how one brand differs from another.

The Data Science Strategies That You Need To Scale-up Your Small Business Are:

Hire A Data Scientist With 2 – 3 Years Of Experience (In Your Relevant Industry)

As a business, you have many employees working for you. Treat them well enough that they become the voice of your brand and attract new and existing customers. Say you run a SaaS startup; hire a data scientist who has a good 2-3 years of experience and has already worked in the SaaS industry.

Such a hire will already have a good understanding of the data your company needs; share your objectives and goals, and they can help you more effectively. They can find and analyze new trends, surface customer preferences, and much more. Hiring the best professional for your team can be costly, though. If you don't have the budget, upskill one of your employees or work with a consultant who can guide you in the right direction.

Using Right Set Of Data To Make Better Decisions

The right set of data matters a lot if you want your results to be accurate, and that dataset should be packed with the concrete evidence and statistics your business needs. For this purpose, data wrangling is necessary to weed out the odd records.

Therefore the best ways to look into data are:

  • Collecting survey reports to identify products, services, and features. 
  • Conducting user surveys to find out how well they relate to your product.
  • When launching new products to understand how a product might perform in the market
  • Determining business threats and new opportunities 

Right Tools and Software That Make Your Work Super-Easy

Gathering and analyzing data is a huge task. Doing it manually can kill your productivity and give you a headache when the work is not finished on time. And when you work manually, there is a high chance your results will not be accurate and you will miss a slice of the data for the same reason.

Python and its libraries are excellent tools for data science and can do a lot of work in very little time. Adding one data visualization tool, such as Tableau or Power BI, will help you understand unstructured data and make complex decisions more manageable.

So master MySQL, Excel, Python, R, Tableau, Microsoft Azure, Apache Spark, Big Data tools, and Hadoop to get most of your work done.
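
As a small, hypothetical illustration of how little code a quick analysis takes with Python's libraries, here is a pandas sketch that summarizes revenue from a sales export; the file name and column names are assumptions for the example.

    import pandas as pd

    # Hypothetical export from your point-of-sale or e-commerce system.
    sales = pd.read_csv("monthly_sales.csv")  # columns assumed: date, product, region, revenue

    # Revenue by product, highest first.
    by_product = (
        sales.groupby("product")["revenue"]
        .sum()
        .sort_values(ascending=False)
    )
    print(by_product.head(10))

    # Month-over-month revenue trend, useful for spotting seasonality.
    sales["date"] = pd.to_datetime(sales["date"])
    monthly = sales.set_index("date").resample("M")["revenue"].sum()
    print(monthly.tail(6))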

Identify and Target New Customers Alongside Existing Ones

You will have many existing customers speaking about your business: people who love what you sell and come back for their next purchase. But what about new customers? How can you target them better, what do they like most, and so on.

It starts with identifying where most of your customers come from, how they interact with your products, and how your products can permanently solve one of their problems. Great customer service then wins you many new customers through word of mouth.

The best way to get insight is to run ads for local and nearby areas and dive into the Google Analytics dashboard, which gives you a complete picture of how customers interact with the ads they see: their location, areas of interest, and much more. You can get this from the marketing team, combine it with your own data science, and produce a robust report.

Discover New Trends And Opportunities To Scale-up Your Business 

To stay at the top of your market, you need to follow ongoing trends and look for the opportunities your competitors are missing. When you fill those gaps, you build trust in your customers' minds.

As a data scientist, your primary work is to do research, come up with concrete ideas, and plan effectively. If you want to sustain your position and stay at the top, do thorough research using advanced tools to find better opportunities, then try them out (at least as a dry run) to see how they work for your company and to collect your customers' feedback.

If it works, great. If not, look for even better ideas. Business is all about taking risks, but calculated ones, so that a misstep won't hurt you much.

Final Words 

Taking new, calculated risks is an effective approach to growing your business fast. But when you invest without research, you face significant losses that are tough to recover from. And if you run a small business, that does not mean it can never go big.

You can make it big; the right strategies, mindset, and team will help you get there. This blog covered five best practices to power up data science use in your small business. Let us know your thoughts, how you would implement data science in your small business, and which tip you found most helpful.

Three Steps to Addressing Bias in Machine Learning

Data is powering this century. There is an abundance of data coming from the digitized world, IoT devices, voice assistants like Alexa and Siri, fitness trackers, and medical sensors, to name a few. Data science is becoming the center of fast-growing sectors like healthcare, logistics, customer service, banking, and finance. AI and machine learning are now mainstays in boardroom conversations, and with this data-centricity comes the big question of governance and ethics in data science.

Step 1. Acknowledge Bias 

Are we ethically responsible for handling data?

Everyone is responsible for handling data with the utmost care. Bypassing ethical data science just for monetary gain fosters bias and stereotyping. Similarly, cross-validating real-world data against biased data results in ill-considered business decisions that reduce not just monetary gain but, most importantly, an enterprise's reputation and customer loyalty. Every enterprise is responsible for growing its business by cultivating togetherness among communities, being more inclusive, and filtering out unconscious bias.

What are the effects of unethical data science?

Data privacy is becoming a major concern as more and more machine learning models learn our digital footprint and predict our future needs, whether we like it or not. Legislation like the GDPR (Europe), the Personal Data Protection Act (India), and the California Consumer Privacy Act (CCPA) stresses the importance of data privacy, protecting digital citizens from the dangerous consequences of misused data.

Micro-targeting based on consumer data and demographics is influencing the action of the targeted consumer segments. With an abundance of data, it is becoming harder and harder to differentiate truth from falsehood. Micro-targeting without the proper understanding of data and its source leads to more harm than good.

Healthcare prediction failures, as with IBM Watson, lead to irreversible consequences. Right now, the healthcare industry is undergoing a major revolution with Artificial Intelligence, and the success of AI in healthcare depends on a one-team approach with transparent discussion among a diverse set of leaders from both healthcare and data science.

Facial recognition software is known to falsely classify people as having criminal intent based on their skin color when the ML models are trained on predominantly white faces. Multiple facial recognition applications are available in the market, but an application's success depends on the diversity of the data used to train its models.

Step 2. Understand Bias

1. Know the Bias Types

It is crucial to understand the different bias types and be conscious of their existence in order to handle data ethically. Bias in machine learning can be classified into sample, prejudice, measurement, algorithm, and exclusion bias.

a. Sample Bias

Sample bias arises when training data contains partial or incorrect information. For instance, predicting a customer's spending activity from their social feeds rather than from relevant payment platforms leads to sample bias.

b. Prejudice Bias

“Our environment, the world in which we live and work, is a mirror of our attitudes and expectations.” – Earl Nightingale

Prejudice rooted in preconceived opinions causes harm not just to the business but also to society and the well-being of future generations. It takes immense strength to acknowledge and eradicate unconscious bias.

c. Exclusion Bias

Everyone is unique, with their own abilities and strengths. The fact that some of us do not follow the norm is no grounds for exclusion; each of us has unique qualities to contribute. Enterprises that do not adopt inclusive policies will be out of the market in short order.

d. Algorithm Bias

Machines do not understand bias. Erroneous assumptions made, consciously or unconsciously, when selecting datasets and algorithms lead to algorithm bias.

e. Measurement Bias

Measurement bias usually happens when a model favors certain outcomes over others. For example, a model predicting which consumer products' sales will double in the next quarter based on past sales history will favor items whose prices were marked down over others.

Step 3. Eliminate Bias 

Eliminating bias is not a one-time activity but a continuous process. It starts with selecting the right algorithm and setting up a data governance team that includes everyone involved in the ML project lifecycle: the business team, data scientists, and the MLOps team.

Models are less prejudiced if the test datasets come from the real world rather than from the sample set. Real-world data also offers the advantage of being diverse and inclusive in nature, since it comes from real customers. At the same time, including data from active customers alone will not solve the inclusion problem. Such unconscious bias can be detected by keeping a human in the loop along with continuous monitoring.
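
As a small illustration of what that continuous monitoring could look like, the sketch below compares a model's accuracy and positive-prediction rate across groups on real-world evaluation data and flags large gaps for human review. The column names and threshold are placeholders, not part of any specific framework.

    import pandas as pd

    # Hypothetical evaluation log: one row per prediction on real-world data.
    eval_df = pd.DataFrame({
        "group":      ["A", "A", "A", "B", "B", "B", "B"],   # e.g. a protected attribute
        "prediction": [1, 0, 1, 0, 0, 1, 0],
        "label":      [1, 0, 0, 1, 0, 1, 1],
    })

    # Accuracy and positive-prediction rate per group.
    eval_df["correct"] = (eval_df["prediction"] == eval_df["label"]).astype(int)
    report = eval_df.groupby("group").agg(
        accuracy=("correct", "mean"),
        positive_rate=("prediction", "mean"),
        n=("label", "size"),
    )
    print(report)

    # Flag the model for human review if the accuracy gap between groups is too large.
    GAP_THRESHOLD = 0.10  # placeholder threshold
    gap = report["accuracy"].max() - report["accuracy"].min()
    if gap > GAP_THRESHOLD:
        print(f"Accuracy gap of {gap:.2f} exceeds threshold - route to human-in-the-loop review")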

Summary

With data growing exponentially and legislation controlling its usage, it is crucial to use data for the common good. Fostering togetherness by collaborating with people from different sectors, and being socially responsible and accountable for using data ethically, will be the foundation of a successful AI revolution.

A version of this blog was originally published here – http://predera.com/reimagining-ai-building-togetherness-with-bias-m…

Data Ingestion Best Practices

Organizations and businesses need data ingestion to make better operational decisions and provide better customer service. Through data ingestion, businesses can understand the needs of their stakeholders, consumers, and partners, allowing them to stay competitive. Data ingestion is also the most effective way for businesses to deal with large volumes of inaccurate and unreliable data.

How is data ingestion done?

It is performed in various ways. The top approaches include:

  • Real-time – Ingesting data in real time is also known as streaming. It is the most crucial method of ingesting data, especially when the information is time-sensitive. In this method, data is retrieved, processed, and stored immediately for real-time applications, such as decision-making.

  • Batch – The batch approach entails moving data at predetermined times. This method is excellent for recurring processes, such as reports that must be generated on a regular basis, for example daily.

  • Lambda Architecture – The lambda architecture is a method that combines real-time and batch procedures. This strategy combines the advantages of the two methods. It makes use of real-time ingestion to extract information from time-sensitive data. It also makes use of batch ingestion to provide a broad view of recurring data.
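
To make the batch approach concrete, here is a minimal, hypothetical sketch of a batch ingestion job in Python: it reads whatever CSV files have landed in a drop folder and appends them to a local SQLite table. The folder, table name, and scheduling are placeholders; a real pipeline would add validation, deduplication, and a proper scheduler.

    import glob
    import sqlite3

    import pandas as pd

    DROP_FOLDER = "incoming/*.csv"   # placeholder location where source files land
    DB_PATH = "warehouse.db"         # placeholder target database
    TABLE = "raw_orders"             # placeholder target table

    def run_batch_ingestion():
        """Load all CSV files currently in the drop folder into the target table."""
        connection = sqlite3.connect(DB_PATH)
        try:
            for path in sorted(glob.glob(DROP_FOLDER)):
                frame = pd.read_csv(path)
                frame["source_file"] = path          # keep lineage for troubleshooting
                frame.to_sql(TABLE, connection, if_exists="append", index=False)
                print(f"Ingested {len(frame)} rows from {path}")
        finally:
            connection.close()

    if __name__ == "__main__":
        # In production this would be triggered by a scheduler (e.g. nightly),
        # while a streaming pipeline would instead consume events as they arrive.
        run_batch_ingestion()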

Best Practices:

Self-service data ingestion 

Many organizations have multiple data sources, and all of this data must be ingested before it is stored and processed. Data continues to grow in size and variety, requiring enterprises to keep adding the resources needed to manage it. A self-service ingestion process relieves the pressure to constantly expand resources, through methods such as automation, and the focus shifts to processing and analysis. The ingestion process becomes very simple, requiring little to no assistance from technical personnel.

Automating the process 

As organizational data continues to grow in both volume and complexity, manual techniques for handling and processing it can no longer be relied on. Automating every step along the way saves time, reduces manual intervention, minimizes system downtime, and increases the productivity of technical personnel.

Automating the ingestion process offers additional benefits including; architectural consistency, error management, consolidated management, and safety. These benefits come in handy to reduce the time taken to process data.

Anticipate challenges and planning appropriately

The imperative of any data analysis is to transform data into a usable format. As data continues to grow in volume and type, so do the complexities of analysis. A process that helps you anticipate these challenges in advance makes it much easier to complete the whole data processing task successfully. Data ingestion is one big process that helps you anticipate these challenges, plan for them in advance, and work through them efficiently as they come, without losing time or output.

Use of Artificial Intelligence

Making use of Artificial Intelligence concepts such as statistical algorithms and machine learning eliminates the need for manual interventions in the ingestion process. Manual intervention increases the number and frequency of errors in the process. Employing Artificial Intelligence not only eliminates these errors but also makes the whole process faster and increases the accuracy levels.

Data ingestion reduces the complexities involved in gathering data from multiple sources and frees up the time and resources for subsequent data processing steps. The emergence of data ingestion tools such as DQLabs has seen the creation of efficient options that can help businesses improve their performance and results by easing the decision-making process from their data.

Solving the Parsing Dilemma

There is a much-maligned topic in web scraping: data parsing. Building scrapers would be a lot easier if the data presented through HTML weren't intended for browsers. But it is, which means the data extraction process has to jump through several hoops before delivering results.

Parsing is part of the process. Unfortunately, it’s one of the most resource-intensive parts of the entire web scraping chain. In fact, developing a parser for a specific website is not enough. Maintaining it over time is required. Even then, that might not be the end as some complex websites might need numerous parsers to work the data out of the source.
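
To see why that maintenance is unavoidable, consider this minimal rule-based parser sketch. The markup and selectors are hypothetical; the moment a site renames a class or restructures its layout, the parser silently returns nothing and has to be updated.

    from bs4 import BeautifulSoup

    # Simplified product page markup; real pages are far messier.
    html = """
    <div class="product">
      <h1 class="title">Espresso Machine</h1>
      <span class="price price--new">€199.00</span>
      <span class="price price--old">€249.00</span>
    </div>
    """

    def parse_product(page_html):
        """Rule-based parsing tied to one specific layout."""
        soup = BeautifulSoup(page_html, "html.parser")
        title = soup.select_one("h1.title")
        new_price = soup.select_one("span.price--new")
        old_price = soup.select_one("span.price--old")
        return {
            "title": title.get_text(strip=True) if title else None,
            "new_price": new_price.get_text(strip=True) if new_price else None,
            "old_price": old_price.get_text(strip=True) if old_price else None,
        }

    print(parse_product(html))
    # If the site changes "price--new" to "price-current", this returns None for the price
    # and the parser has to be rewritten - the maintenance burden described above.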

The dilemma

Any sufficiently large scraping project has to develop its own parsers. That means dedicating time and resources to a comparatively low-skill task. Most of the time, developing and maintaining parsers is a job for junior developers.

However, junior developers are a highly valuable resource. Spending time maintaining and writing parsers usually barely improves their skills. In fact, it might even bring a certain level of annoyance.

On the other hand, parsing is a critical part of the scraping process. Most of the time, the data acquired is messy and unusable without intervention. Since the end goal of all web scraping, whether for personal or commercial use, is to provide data for analysis, parsing is a necessity.

In short, we have an essentially necessary process that takes up a significant portion of resources and time while not being significantly challenging or useful to the individual. In other words, it’s a resource sink. Solving such a challenge would free up a lot of highly skilled hands and brains to do greater work.

A look towards automation

If you were to approach any sensible CXO or businessperson in general with an idea to save significant time for developers, they would accept the suggestion with open arms. There’s rarely anything better than saving resources through automation.

However, automating parsing isn’t as simple as it may seem. Partly, the reason is the frequent maintenance required. Usually, the requirement arises because websites change their layouts. If they do so, the parser breaks.

Yet, predicting future layout and coding changes is simply impossible. Therefore, no rule-based approach is truly viable. Classical programming is of little help here. Manual work, as mentioned previously, is a huge time and resource sink.

There’s one option remaining that has built up a lot of hype over the past decade or so. That is machine learning applications. Parsing seems to be the perfect way to test the mettle of machine learning engineers.

Since all of HTML has a similar structure across certain categories of pages, the visual changes are decidedly small. Additionally, layout changes aren’t usually massive overhauls of an entire website. They’re mostly incremental UX and UI improvements that are implemented. While that may add to the annoyance of a developer, it’s a great candidate for a stochastic algorithm looking for similarities between trained data and new data.

Preparing for adaptive parsing

Before engaging into any machine learning project, at least these questions should be answered beforehand:

  1. What will be the limits of the model?
  2. What type of learning will be needed?
  3. What type (labeled/unlabeled) data will be used?
  4. How will the data be acquired?

Luckily, for our Adaptive Parser project at Oxylabs, we had the easiest answers to the last three questions. Since we already knew what we were looking at and for (data from specific pages), we could use labeled data. That meant supervised learning, one of the most practical and easiest-to-execute approaches, could be used.

However, the true difficulty lies in answering the first question as the rest, at least partly, depend on it. Since all resources are finite, the machine learning model should be as narrow as required and as wide as possible. For us, it meant looking at how our clients are using our solutions (e.g. Real-Time Crawler) and making a decision based on data.

As we discovered through our research, e-commerce product pages were the most painful ones to parse. The source can be a bit wonky for parsing purposes, and there are usually nearly identical fields that are only sometimes present (e.g. “new price”/“old price”).

These fields can be confusing to machine learning models as well due to their similarity. However, answering the question about limits lets us set proper expectations for accuracy and the amount of data required. Clearly, we’ll need quite a bit of labeled data as we will have at least one problematic field.
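
As a rough illustration of what such a model can look like (this is not Oxylabs' actual Adaptive Parser), here is a sketch that turns HTML elements into simple features and trains a supervised classifier to label them as title, price, or other. The features and training rows are invented for the example.

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def element_features(tag, css_class, text):
        """Hand-crafted features describing one HTML element."""
        return {
            "tag": tag,
            "class_has_price": "price" in css_class,
            "class_has_title": "title" in css_class or "name" in css_class,
            "has_currency_symbol": any(symbol in text for symbol in "€$£"),
            "text_length": len(text),
        }

    # Tiny labeled training set: (tag, class attribute, text) -> field label.
    training_rows = [
        (("h1", "product-title", "Espresso Machine"), "title"),
        (("span", "price price--new", "€199.00"), "price"),
        (("span", "price price--old", "€249.00"), "price"),
        (("div", "description", "A compact machine for small kitchens."), "other"),
        (("a", "breadcrumb", "Home / Kitchen"), "other"),
    ]

    X = [element_features(*row) for row, _ in training_rows]
    y = [label for _, label in training_rows]

    model = make_pipeline(DictVectorizer(sparse=False), LogisticRegression(max_iter=1000))
    model.fit(X, y)

    # A previously unseen element from a slightly changed layout.
    unseen = element_features("span", "price-current", "€209.00")
    print(model.predict([unseen])[0])  # expected to predict "price" even though the class name changed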

Answering the final question was somewhat easier. We already knew where to pick up our examples. In fact, we could quite quickly collect a large amount of e-commerce pages. However, the strenuous part is labeling. It’s quite easy to get your hands on large amounts of unlabeled data. 

Labeling data and training

Every supervised learning dataset has to be labeled. In our case that meant providing labels for most fields in every e-commerce page and it had to be done at least partly manually. If it could be automated, someone would have already created an adaptive parser.

In order to save time and in-house resources, we took a two-pronged approach. First, we hired a few helping hands that would label fields from our soon-to-be training set. Second, we spent some time developing a GUI-based labeling application to speed up the process. The idea is simple – we spend more financial resources on manual repetitive tasks to save up time for cognitive tasks for our machine learning engineers.

After getting our hands on enough labeled data to start training our Adaptive Parser, the process is really a lot of trial and error with some strategizing peppered in between. Sometimes, the model will struggle with specific parts and some logic-based nudging will be required (or it will at least speed up the process).

Many months and hundreds of tests later, we have a solution that is able to automatically parse fields in e-commerce product pages, which can adapt to changes with reasonable accuracy. Of course, now maintenance will be the challenge, but we have shown that it’s possible to automate parsing.

Conclusion

Automating parsing in web scraping isn’t just about saving resources. It’s also about increasing the speed, efficiency, and accuracy of data over time. All of these factors influence the way businesses engage with external data. Primarily, there’s less time dedicated to working around the data and more time to working with data.

More discussions on the pressing topics around web scraping, industry trends and expert tips will be shared in an annual web scraping conference Oxycon. It will take place online on August 25-26th and the registration is free of charge.

Android Or Ios – Which Platform Should You Choose For Developing Your App

Mobile application development is among the most consistently growing sectors in software production, with increasing demand for fast, user-friendly apps in recent years. One statistic suggests that in 2020 alone, users spent an average of 87% of their total time online in mobile apps. When you opt for app development, there are two leading platforms to choose from: Android and iOS. Both offer incredible development options. Before deciding which platform to build your app on, you should go through a thorough comparison of the two. This article gives you an overview of each platform's perks and perils and points out the differences.

After you have decided which platform to go with, the next step is to choose the developers. Whether you hire an iOS app developer or an Android app developer, make sure they are technically sound and have the right knowledge for your app.

The Benefits and Drawbacks of iOS App Development

iOS app development is always in high demand because iOS apps consistently perform extremely well. The platform is fast and highly reliable, very user-friendly, and the final output of a developed app tends to have very few bugs.

An Experience that is sleek and flawless

The iOS platform provides developers with detailed guidelines. When you hire an iOS app developer, these guidelines help them create a consistent user interface for the application. The interface options may sometimes feel limited, but this approach usually guarantees an exceptional user experience.

The Drawbacks of iOS development

Native iOS app development requires software like Xcode, which runs on a Mac. So when you hire an iOS app developer to build for iOS smartphones, they will always need at least one piece of Apple hardware.

Extra Demanding Requirements for App Release

The Apple App Store is more demanding than the Google Play Store. Even if your app does not break any of Apple's guidelines, it can still be rejected if it is judged irrelevant or of little use. So when you hire an iOS app developer, make sure they are well versed in the App Store guidelines and that the app they develop is good enough to be accepted.

Fewer Options for Customization

If, after your app has been developed and published on the App Store, you want to customize its interface, your options are restricted. It can also be difficult to add new features if the app at any stage needs to interact with third-party software.

The Benefits and Drawbacks of Android App Development

Flexibility

Android generally has a less restricted environment than iOS. In terms of distribution, these applications run on any Android device, and there are no hardware compatibility issues, so the development process is much more flexible.

Availability of huge and elaborate learning resources

The Android platform also allows a smoother development process by relying on the Java language. Java is an extremely versatile programming language that supports Windows, Linux, and macOS, which allows developers to build Android applications regardless of the operating system their machine is running. Google provides a vast knowledge base for beginners, with interactive materials, exercises, and whole training programs for different levels of Android developers.

Easy Publishing of Apps

When it comes to publishing apps, Google is less strict. Google allows developers to post to the Google Play Store; previously, the review process was completed automatically within about 7 hours, though it now takes up to a week for new developers. Despite this new rule, almost all Android apps that do not violate the policies get approved. Moreover, developers pay only a small registration fee.

The ability to go beyond the Smartphones

Developing an Android application means building software for a complete ecosystem of devices, which lets you expand your app's functionality. The app can run on Wear OS devices, Daydream and Cardboard VR headsets, Android Auto, and various other platforms. It also gives you the power to integrate your application with cars, smartwatches, and TVs alongside mobile phones.

Issues with quality assurance

Fragmentation can be beneficial: it allows developers to target different Android platforms simultaneously. But it makes the testing process very complicated. Even for the simplest apps, developers have to deliver fixes at frequent intervals, because the majority of users keep using older versions of the OS even after upgrades are released. So hire an Android app developer who is well equipped to handle this.

Higher Cost and time requirement

Developing an Android app is typically more time-consuming than developing an iOS app. Because an Android app takes more time in development and quality assurance, the cost increases as well.

Availability of more free apps affecting in-app purchases 

Android users tend to look for free apps and spend less on in-app purchases than iOS users do, so the return on investment is not always as high.

Security issues

Because Android is an open-source platform, there is a greater chance of becoming a victim of cyber fraud. iOS, by contrast, is a much more closed ecosystem, and attacks on it are rarer.

A Brief Comparison of iOS vs. Android Development

Both Android and iOS app development continue to gain popularity, and both have bright prospects for the next few years. Both platforms will hold their present market strength, and neither is likely to lose popularity.

If you are planning to develop an app that sells extensive additional content, or a retail app with paid purchases, then iOS will generally give you more opportunities to make a profit.

Android apps are more popular among users in medical or technical fields, whereas iOS is popular among high-end business professionals, sales experts, and senior managers. iOS is also preferred by audiences with higher household incomes who strive to keep up with technology trends.

Android development has greater global coverage, with strong audiences in Africa, Latin America, and parts of Europe, whereas the iOS audience is concentrated in Australia, North America, and Western Europe.

So which platform to choose first?

When deciding between iOS and Android development, keep the following factors in mind:

  • The location of your users
  • The budget you are willing to spend on development, and your development time requirements
  • How unique you want the app's interface to be

To Conclude 

Both the Android and iOS operating systems dominate the market, both have good future prospects, and both have extremely large user bases across all fields. Whether you hire an Android or an iOS app developer should ultimately be driven by the application you are developing and your future plans for it.

Source Prolead brokers usa

Moving Beyond the 9-to-5

As the Pandemic wanes (more or less), the debate about going back to the office vs. continuing to work from home remains in full swing. Central to this debate is the question about whether it is, in fact, better for companies for people to work from an office than it is to work remotely. The answers to this can be wildly divergent, from those who believe that productive work can only be done in an office, where resources can be consolidated, and people can meet, face to face, with one another for collaboration, to those who see work better done when the workers essentially control their own schedules and workflows.

To that end, one of the fundamental questions in this debate is what, exactly, it means to be productive. Productivity has been an integral part of the work environment for more than a hundred and twenty years, yet it is also something that is both poorly defined and quite frequently misused. To understand this, you have to go back to Frederick Taylor, who first defined many of the principles of the modern work environment around the turn of the twentieth century.

How Frederick Taylor Invented Productivity

 

Frederick Taylor, Genius or Con Man?

Taylor was an odd character to begin with. He was born to a fairly wealthy family and managed to get admitted to Harvard Law School, but due to deteriorating eyesight he decided to go into mechanical engineering instead, working first as an apprentice and later as a master mechanic at Midvale Steel Works in Pennsylvania. He eventually married the daughter of the company's president while working his way up from the shop floor to sales and eventually to management.

From there, Taylor began putting together his own observations about how inefficient the production lines were and how there needed to be more discipline in measuring productivity, which at the time meant the number of components a person could produce in a given period of time. In 1911, he wrote a monograph on the subject called The Principles of Scientific Management, which generalized these observations from the steel mill to all companies.

Taylor’s work quickly found favor in companies throughout the United States, where his advocacy of business analytics, precision time-keeping, and performance reviews seemed to resonate especially well in the emerging industrial centers of the country. At the same time, the data that he gathered was often highly suspect – for instance, he would frequently use the measurements of the fastest or strongest workers as the baseline for all of his measurements, then would recommend that owners dock the pay of workers that couldn’t reach these levels. He also mastered the art of business consulting, pioneering many of the techniques that such consultants would use to sell themselves into companies decades later.

Productivity was one of his inventions as well, and it eventually became the touchstone of corporations globally: a worker's output could be measured by his or her productivity, the number of goods they produced in a given period of time. Even this measure was somewhat deceptive, however. It was determined at least in part by the automation inherent in an assembly line, and it assumed that the production of widgets was the only meaningful measurement in a society that was even then shifting from agricultural to industrial. Other factors, such as the quality or complexity of the products, the physical or mental state of the workers, or even the stability of the production line, were ignored entirely.


Do Not Fold, Spindle or Mutilate

Productivity In The Computer Age

Automation actually made a hash of productivity early on. An early bottling operation for beer usually involved manually filling a bottle, then stoppering it. A skilled worker could get perhaps a dozen such bottles out per minute and could sustain that pace for an hour or so before needing to take a break. By the 1950s, automation had improved to the extent that a machine could fill and stopper 10,000 bottles a minute, a nearly thousandfold increase in productivity. The bottler at that point was no longer performing the manual labor, but simply ensuring that the machine didn't break down, that the empty bottles were positioned in their lattice, and that the filled ones were boxed and ready for shipment. Timing the bottler for filling bottles no longer made any sense, but still the metric persisted.

Not surprisingly, corporations quickly adopted Taylorism for their own internal processes. People became measured by how many insurance claims they could process, despite the fact that an insurance claim required a decision, which meant understanding the complexity of a problem. Getting more insurance claims processed may have made the business run faster, but it did so at the cost of making poorer decisions. It would take the rise of computer automation and the dubious benefits of specialized artificial intelligence to get to the point where semi-reasonable decisions could be made far faster, though the jury is still out as to whether the AI is in fact any better at making those decisions than humans.

Similar productivity issues arise with intellectual property. In the Tayloresque world, Ernest Hemingway was terribly unproductive. He only wrote about twenty books over his forty years of being a professional writer, or one book every two years. Today, he could probably write a book a year, simply because revising manuscripts is far easier with a word processor than a typewriter, but the time-consuming part of writing a book – actually figuring out what words go into making up that book – will take up just as long.

Even in the world of process engineering, in most cases what computers have done is reduce the number of separate people handling different parts of a process, often down to one. Forty years ago, putting together a slide presentation was a fairly massive undertaking that required graphic artists, designers, photographers, copywriters, typographers, printers, and so forth, and it took weeks. Today, a ten-year-old kid can put together a PowerPoint deck that would have been impossible for anyone to produce earlier without a half-million-dollar budget.

We are getting closer to that number being zero: fill in some parameters, select a theme, push a button, and *blam* your presentation is done. This means, of course, that there are far more presentations out there than anyone would ever be able to consume, and that the bar for creating good, eye-catching, memorable presentations becomes far, far higher. It also means that Tayloresque measurements of productivity very quickly become meaningless when measured in presentations completed per week.

That's the side usually left out in talking about productivity. Productivity is a measure of efficiency, and efficiency is a form of optimization. Optimizations reach a point of diminishing returns, where more effort results in less meaningful gains. That's a big part of the reason that productivity growth took such a nosedive after the turn of the twenty-first century. Even with significantly faster computers and algorithms, the reality was that the processes that could be optimized had already been so tweaked that the biggest factor in performance gains came right back down to the humans, who hadn't really changed all that much in the last century.

A forum that I follow posed the question of whether it is better for one's career to work in the office or work from home. One commenter suggested that people who work remotely may get passed over for promotion compared to someone who comes in early and stays late, because managers don't see how hard the remote worker is working. This is a valid concern, but it brings back a memory of when I started working a few decades ago and found myself putting in ten- and eleven-hour days at the office for weeks on end trying to hit a critical deadline. Eventually, I was stumbling in exhausted, and the quality of my work diminished dramatically. I was essentially giving my employer three additional hours a day at no cost, though after a while, they were getting what they paid for.

Knowledge work, which I and a growing number of people do, involves creating intellectual property. Typically, this involves identifying structure, then building, testing, and integrating virtual components. It is easy to tell at a glance how productive I am, both in terms of quantity (look at the software listings or the article page) and quality (see whether it passes a build process, or read the prose). This is true for most knowledge activities performed today. If there are questions, I can be reached by email or phone or SMS or Slack or Teams or Zoom or any of a dozen other ways. With most DevOps and continuous integration processes, a manager can look at a dashboard and literally see what I have worked on within the last few minutes.

In other words, regardless of whether you are working remotely or in the office, a manager has ample tools to ascertain whether a worker is on track to accomplish what they have pledged to accomplish. This is an example of goal-oriented management, and quite frankly it is exactly how most successful businesses should be operating today.


The Paycheck Was Never Meant To Measure Time

The Fallacy of the Paycheck and the Time Clock

So let's talk a little bit about things from the perspective of being a manager. If you have never done it before, managing a remote workforce is scary. Most management training historically has focused on people skills – reading body language, setting boundaries, identifying slackers, dealing with personal crises, and most importantly, keeping the project that you are managing moving forward. Much of it is synthesizing information from others into a clean report, typically by asking people what they are working on, and some of it is delegating tasks and responsibilities. In this kind of world, there is a clear hierarchy, and you generally can account for the fact that your employees are not stealing time or resources from you *because you watch them*.

I’ll address most of this below, but I want to focus on the last, italicized statement first because it gets into what is so wrong about contemporary corporate culture. One place where Tayloresque thinking embedded itself most deeply into the cultural fabric of companies is the notion that you are paying your employees for their time. This assumption is almost never questioned. It should be.

Until the middle of the Industrial Age, people were typically paid monthly or fortnightly if they were employed by a member of the nobility or gentry, or they produced and sold their own goods if they were craftsmen or farmers, or they were budgeted an account if they were a senior member of the church. Oftentimes such payment partially took the form of room and board (or food) or similar services in exchange. Timekeeping seldom entered into it – you worked when there was work to be done and rested when the opportunity arose.

Industrialization brought with it more precise clocks and timekeeping, and you were paid for the time that you worked; because of the sheer number of workers involved, this also required better sets of accounting books and more regular disbursement of funds for payment. It was Taylor who quantized this down to the hour, however, with the natural assumption that you were being paid not per day of work but for ten hours of work a day. This was also when the term work ethic seemed to gain currency, the idea being that a good worker worked continuously, never complained, and never asked for too much, while bad workers were lazy and would steal both resources and time from employers if they could get away with it.

In reality, most work is not continuous in nature but can be broken down into individual asynchronous tasks pulled from a queue. Work only becomes continuous if the queue is left unattended for too long, or if tasks are added to the queue faster than they can be completed (see the sketch below). Office work, from the 1930s to the 1970s, usually involved a staff of workers (mainly female) who worked in pools to process applications, invoices, correspondence, or other content – when a pool worker was done, she would be assigned a new project to complete. This queue-and-pool arrangement basically kept everyone busy, further cementing the idea that an employer was actually paying for the employee's time, especially since there was usually enough work to fill the available hours of the day.
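
To make the queue intuition concrete, here is a minimal simulation sketch in Python. It is illustrative only (the function name, arrival and service rates, and step counts are all assumptions rather than anything from the original article), but it shows the two regimes described above: when tasks arrive more slowly than they can be completed, the worker has natural idle periods; when they arrive faster, the backlog grows and the work effectively becomes continuous.

  # Minimal sketch (illustrative only): a discrete-time simulation of a single
  # worker pulling tasks from a queue. Rates and names are hypothetical.
  import random

  def simulate_queue(arrival_rate, service_rate, steps=10_000, seed=42):
      """Return (average queue length, fraction of steps spent idle)."""
      rng = random.Random(seed)
      queue_len, idle_steps, total_queue = 0, 0, 0
      for _ in range(steps):
          # A new task arrives with probability arrival_rate per step.
          if rng.random() < arrival_rate:
              queue_len += 1
          # The worker finishes a waiting task with probability service_rate.
          if queue_len > 0:
              if rng.random() < service_rate:
                  queue_len -= 1
          else:
              idle_steps += 1  # nothing in the queue: genuine slack time
          total_queue += queue_len
      return total_queue / steps, idle_steps / steps

  if __name__ == "__main__":
      # Tasks arrive slower than they can be completed: bursty work, real idle time.
      print(simulate_queue(arrival_rate=0.5, service_rate=0.8))
      # Tasks arrive faster than they can be completed: the backlog keeps growing
      # and the work becomes effectively continuous.
      print(simulate_queue(arrival_rate=0.8, service_rate=0.5))

In the first case the queue stays short and the worker is idle a noticeable fraction of the time; in the second, the average backlog climbs steadily, which is exactly the condition under which work stops being asynchronous.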

That balance shifted in the 1970s and 80s as the impact of automation began to hit corporations hard. The secretarial pool had all but disappeared by 1990 with the advent of computers and networking. While productivity shot up – fewer people were doing much more "work" in the sense that automation enabled far more processing – people began to find themselves with less and less to do, which made it possible for companies to eliminate or consolidate existing jobs. A new generation picked up programming and related skills, and the number of companies exploded in the 1990s as entrepreneurs looked for new niches to automate and the barrier to entry for new companies dropped dramatically.


By focusing on demonstrable goals rather than “seat-time”, organizations can become more data-oriented.

The WFA Revolution Depends Upon Goals and Metrics

Since 2000, there have been three key events that have dramatically changed the landscape for work. The first was the rise of mobile computing, which has made it possible for people to work anywhere there is a network signal. The second was the consolidation of cloud computing, moving away from the requirement that resources be on premises. Finally, the pandemic stress-tested the idea of work virtualization in a way that nothing else could have, likely forcing the social adoption of remote work about a decade earlier than it would otherwise have happened.

Productivity through automation has now reached a stage where it is possible to

  1. get reliable metrics based upon work completed towards specific goals, regardless of the time specifically spent,
  2. automate those tasks which do not in fact require more than minimal human intervention,
  3. get access to the resources needed to accomplish specific tasks, regardless of where those tasks are accomplished,
  4. provide a superior environment for meeting virtually across multiple time zones, creating both a video and a transcript artifact of such meetings,
  5. provide tools for collaborating in the same way, either synchronously or asynchronously (addressing the water-cooler problem),
  6. ensure that information remains secure, and
  7. provide a set of eyeballs on evolving situations anywhere in the world at any time.

Put another way: remote workforce productivity is not the issue here.

Most people are far more productive than they have ever been, to the extent that it is becoming harder and harder to fill a forty-hour week most of the time. I'd argue that when an employer pays an employee, what they should be doing is spreading a year-long payment into twenty-six chunks, paying not for the time spent but for the availability of the expertise. That the workweek is twenty hours one week and fifty hours the next is irrelevant: you are paying a salary, and the actual number of hours worked is far less important than whether the work is being done consistently and to a sufficiently high standard. This was true before the pandemic, and if anything it is more true today.

Businesses began pushing in the 1970s to change labor laws so that companies could classify part-time workers as hourly. This meant that, rather than having a minimum guaranteed total annual income, such workers were paid only for their time on premises. In doing so, such workers (who were also usually paid at or even below minimum wage) would typically be the ones to bear the brunt if a business had a slow week, yet they were also typically responsible for their own healthcare and were ineligible for other benefits. In this way, even if on paper they were making $30,000 a year, in reality such workers' actual income was likely half that, even before taxes. By 1980, labor laws had effectively institutionalized legalized poverty.

After the pandemic, companies discovered, much to their chagrin, that their rapid shedding of jobs in 2020 came back to bite them hard in 2021. Once people have a job, they develop a certain degree of inertia about looking for a new one, and oftentimes may refuse to look for other work simply because switching jobs is always somewhat traumatic. This also tends to depress wage growth within companies, because most companies will only pay a person more (and even then only up to a specific ceiling) if they also take on more responsibility; in other words, new hires generally make more than existing workers in the same positions.

At the bottom of the pandemic bust, more than 25 million people were thrown out of work, a deeper plunge even than during the Great Depression. The rebound was fairly strong, however, which meant that suddenly every company that had jettisoned workers was trying to rehire all at once. For the first time in a generation, labor had newfound bargaining strength. This also coincided with the long-overdue generational retirement of the Boomers and the subsequent falloff in the number of GenXers, a generation roughly 35% smaller than the one before it. Demographic trends hint that the labor market is going to favor employees over employers for at least the next decade.

Given all that, it's time to rethink productivity in the Work From Home era. The first part of this is to understand that work has become asynchronous, and ironically, it's healthier that way. There will be periods when employees are idle, and others when they are very busy. Most small businesses implicitly understand this: restaurants (and indeed, most service-economy jobs) have slack times and busy times. Perhaps it is time for "hourly" workers to go back to being paid salaries again. That way, if someone is not needed on-site at a particular time, sending them home doesn't become an economic burden for them. On the flip side, that also puts the onus on the worker to remain reachable, in one of any number of ways, should things get busy again.

Once you move into the knowledge economy, the avenues for workers become more open. Salary holds once more here, but so too does the notion of being available at certain times. I’ve actually seen an uptick in the number of startup companies that utilize Slack as a way of managing workflow, even in service sector work, as well as indicating when people need to be in the office versus simply need to be working on projects.

I am also seeing the emergence of a 3-2-2 week: three days that are specifically set aside for meetings, either onsite or over telepresence channels such as Zoom or Teams, two days where people may be on call but generally don’t have to meet and can focus on getting the most productivity without meetings interrupting their concentration, and two days that are considered “the weekend”. When workloads are light (such as during summers or winter holidays), this can translate into “light” vacations, where people are just putting in a couple of hours of work a day during their “Fridays” and are otherwise able to control their schedules. When workloads are heavy (crunch time) the bleed even into the weekend CAN happen, so long as it’s not done for an extended period of time.

Asynchronous, goal-oriented, and demonstrable project planning also becomes more critical in the Work From Home era. This, ironically, means that "scrum"-oriented practices should be deprecated in favor of attaching work products (in progress or completed) to workflows – whether that's updating a Git repository, publishing a blog, updating a reference standard, or designing media or programmatic components. Continuous integration is key here – use DevOps processes to ensure that code and resources reflect the current state of the project and to provide a tracking log of what has been done by each member of a given team.
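
As one small, hypothetical illustration of such a tracking log, the Python sketch below counts recent commits per author in a Git repository. The repository path and time window are assumptions, and real DevOps dashboards aggregate far more (builds, reviews, tickets), but the principle is the same: evidence of demonstrable work products rather than hours observed.

  # Hypothetical sketch: summarize commits per team member over the last week.
  # Assumes it is run inside (or pointed at) a Git working copy.
  import subprocess
  from collections import Counter

  def commits_per_author(since="7 days ago", repo="."):
      """Count commits per author name since the given time."""
      log = subprocess.run(
          ["git", "-C", repo, "log", f"--since={since}", "--pretty=format:%an"],
          capture_output=True, text=True, check=True,
      ).stdout
      return Counter(line for line in log.splitlines() if line.strip())

  if __name__ == "__main__":
      for author, count in commits_per_author().most_common():
          print(f"{author}: {count} commit(s) in the last week")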


Micromanagement, abusive behavior, and political games – is it any wonder people are staying away from the office?

For production teams, this should be old hat, but it is incumbent on management to work in the same way, and ironically, this is where the greatest resistance is likely to come from. Traditional management has typically been more face-to-face in its interactions (in part because senior management has also traditionally been more sales-oriented). The more senior the position, the more likely it is that person will need comprehensive real-time reporting, and the more difficult (and important) it is to summarize the results from multiple divisions and departments.

Not surprisingly, this is perhaps the single biggest benefit of a data-focused organization with strong analytics: it makes it easier for managers to see, in the aggregate, what is happening within an organization. It also makes it easier to see who is being productive, who needs help, and who, frankly, needs to be left behind, a group that includes more than a few of those same managers.

https://www.theatlantic.com/ideas/archive/2021/07/work-from-home-be…

You cannot talk about productivity without also talking about non-productivity. Non-productivity doesn't come from people who are genuinely trying but are struggling due to a lack of resources, training, or experience. One thing that many of these same tools can do is highlight who those people are without putting them on the spot, and a good manager will then be able to assign a mentor or make sure they get the training they need.

Rather, it’s those workers who have managed to find a niche within the organization where they don’t actually do much that’s productive, but they seem to constantly be busy. Work from home may seem to be ideal here, but if you assume that this also involves goal-oriented metrics, it actually becomes harder to “skate” when working remotely, as there is a requirement for having a demonstrable product at the end of the day.

Finally, one of the biggest productivity problems with WFH/WFA has to do with micromanagement as compensation for being unable to “watch” people at work. This involves (almost physically) tying people to their keyboards or phones, monitoring everything that is done or said, and then using lack of “compliance” as an excuse to penalize workers.

During the worst of the pandemic, stories emerged of companies doing precisely this. Not surprisingly, those companies found themselves struggling to find workers as the economy started to recover, especially since many of them had a history of underpaying their workers as well. Offices tend to create bubble effects: people are less likely to think about leaving when they are in a corporate cocoon than when they are working from home, and behavior that may be prevalent within offices – gaslighting, sexual harassment, bullying, overt racism, bosses not crediting their workers, and so forth – is more readily recognized as unacceptable when viewed from outside the office than from within the bubble.

There are multiple issues involved with WFH/WFA that do come into play, some of them legitimate. However, the argument that productivity is the reason companies want workers to come back to the office is at best specious. While it is likely more work for managers, a hybrid arrangement, where the office essentially becomes a place where workers congregate when they genuinely need to gather (and those times certainly exist), is likely baked into the cake by now, especially as the Covid Delta variant continues to rage in the background. It's time to move beyond Taylorism and the fallacy of the time clock.

Kurt Cagle is the Community Editor of Data Science Central, a TechTarget property.

Source Prolead brokers usa
