Defining Productivity In The Work-From-Home Era

As the pandemic wanes (more or less), the debate about going back to the office versus continuing to work from home remains in full swing. Central to this debate is the question of whether it is, in fact, better for companies to have people work from an office than to have them work remotely. The answers can be wildly divergent, ranging from those who believe that productive work can only be done in an office, where resources can be consolidated and people can meet face to face to collaborate, to those who believe work is better done when the workers essentially control their own schedules and workflows.

To that end, one of the fundamental questions in this debate is what, exactly, it means to be productive. Productivity has been an integral part of the work environment for more than a hundred and twenty years, yet it is also something that is both poorly defined and frequently misused. To understand why, you have to go back to Frederick Taylor, who first defined many of the principles of the modern work environment around the turn of the twentieth century.

How Frederick Taylor Invented Productivity

 

Frederick Taylor, Genius or Con Man?

Taylor was an odd character to begin with. He was born to a fairly wealthy family and managed to get admitted to Harvard Law School, but due to deteriorating eyesight, he went into mechanical engineering instead, working first as an apprentice and later as a master mechanic at Midvale Steel Works in Pennsylvania. He eventually married the daughter of the president of the company while working his way up from the shop floor to sales and, eventually, to management.

From there, Taylor began putting together his own observations about how inefficient the production lines were and how there needed to be more discipline in measuring productivity, which at the time meant the number of components that a person could produce in a given period of time. In 1911, he wrote a monograph on the subject called The Principles of Scientific Management, which generalized these observations from the steel mill to all companies.

Taylor’s work quickly found favor in companies throughout the United States, where his advocacy of business analytics, precision time-keeping, and performance reviews seemed to resonate especially well in the emerging industrial centers of the country. At the same time, the data that he gathered was often highly suspect – for instance, he would frequently use the measurements of the fastest or strongest workers as the baseline for all of his measurements, then would recommend that owners dock the pay of workers that couldn’t reach these levels. He also mastered the art of business consulting, pioneering many of the techniques that such consultants would use to sell themselves into companies decades later.

Productivity was one of his inventions as well, and it eventually became the touchstone of corporations globally – a worker’s output could be measured by his or her productivity: the number of goods they produced in a given period of time. Even this measure was somewhat deceptive, however. It was determined at least in part by the automation inherent in an assembly line, and it assumed that the production of widgets was the only meaningful measurement in a society that was even then shifting from agricultural to industrial. Other factors, such as the quality or complexity of the products, the physical or mental state of the workers, or even the stability of the production line, were ignored entirely.


Do Not Fold, Spindle or Mutilate

Productivity In The Computer Age

Automation actually made a hash of productivity early on. An early bottling operation for beer usually involved manually filling a bottle, then stoppering it. A skilled worker could get perhaps a dozen such bottles out per minute and could sustain that for an hour or so before needing to take a break. By the 1950s, automation had improved to the extent that a machine could fill and stopper 10,000 bottles a minute, nearly a thousandfold increase in productivity. The bottler at that point was no longer performing the manual labor but simply ensuring that the machine didn’t break down, that the empty bottles were positioned in their lattice, and that the filled ones were boxed and ready for shipment. Timing the bottler for filling bottles no longer made any sense, but still the metric persisted.

Not surprisingly, corporations quickly adopted Taylorism for their own internal processes. People came to be measured by how many insurance claims they could process, despite the fact that an insurance claim required a decision, which meant understanding the complexity of a problem. Getting more insurance claims processed may have made the business run faster, but it did so at the cost of making poorer decisions. It would take the rise of computer automation and the dubious benefits of specialized artificial intelligence to get to the point where semi-reasonable decisions could be made far faster, though the jury is still out as to whether the AI is in fact any better at making those decisions than humans.

Similar productivity issues arise with intellectual property. In the Tayloresque world, Ernest Hemingway was terribly unproductive. He wrote only about twenty books over his forty years as a professional writer, or one book every two years. Today, he could probably write a book a year, simply because revising manuscripts is far easier with a word processor than a typewriter, but the time-consuming part of writing a book – actually figuring out what words go into it – would take just as long.

Even in the world of process engineering, in most cases what computers have done is reduce the number of separate people handling different parts of the process, often down to one. Forty years ago, putting together a slide presentation was a fairly massive undertaking that required graphic artists, designers, photographers, copywriters, typographers, printers, and so forth working for weeks. Today, a ten-year-old kid can put together a PowerPoint deck that would have been impossible for anyone to produce earlier without a half-million-dollar budget.

We are getting closer to that number being zero: fill in some parameters, select a theme, push a button, and *blam* your presentation is done. This means, of course, that there are far more presentations out there than anyone would ever be able to consume, and that the bar for creating good, eye-catching, memorable presentations becomes far, far higher. It also means that Tayloresque measurements of productivity very quickly become meaningless when measured in presentations completed per week.

That’s the side usually left out in talking about productivity. Productivity is a measure of efficiency, and efficiency is a form of optimization. Optimizations reach a point of diminishing returns, where more effort results in less meaningful gains. That’s a big part of the reason that productivity took such a nosedive after the turn of the twenty-first century. Even with significantly faster computers and algorithms, the processes that could be optimized had already been so tweaked that the biggest factor in performance gains came right back down to the humans, who haven’t really changed all that much in the last century.

A forum that I follow posed the question of whether it is better for one’s career to work in the office or work from home. One person commented that people who work remotely may get passed over for promotion compared to someone who comes in early and stays late, because managers don’t see how hard the remote worker is working compared to the office worker. This is a valid concern, but it brings back a memory of when I started working a few decades ago and found myself putting in ten- and eleven-hour days at the office for weeks on end trying to hit a critical deadline. Eventually, I was stumbling in exhausted, and the quality of my work diminished dramatically. I was essentially giving my employer three additional hours a day at no cost, though after a while, they were getting what they paid for.

Knowledge work, which I and a growing number of people do, involves creating intellectual property. Typically, this involves identifying structure, then building, testing, and integrating virtual components. It is easy to tell at a glance how productive I am, both in terms of quantity (look at the software listings or the article page) and quality (see whether it correctly passes a build process, or read the prose). This is true for most activities performed today. If there are questions, I can be reached by email or phone or SMS or Slack or Teams or Zoom or any of a dozen other ways. With most DevOps and continuous integration processes, a manager can look at a dashboard and literally see what I have worked on within the last few minutes.

In other words, regardless of whether you are working remotely or working in the office, a manager has ample tools to ascertain whether a worker is on track to accomplish what they have pledged to accomplish. This is an example of goal-oriented management, and quite frankly, it is exactly how most successful businesses should be operating today.


The Paycheck Was Never Meant To Measure Time

The Fallacy of the Paycheck and the Time Clock

So let’s talk a little bit about things from the perspective of being a manager. If you have never done it before, managing a remote workforce is scary. Most management training historically has focused on people skills – reading body language, setting boundaries, identifying slackers, dealing with personal crises, and most importantly, keeping the project that you are managing moving forward. Much of it is synthesizing information from others into a clean report, typically by asking people what they are working on, and some of it is delegating tasks and responsibilities. In this kind of world, there is a clear hierarchy, and *you generally can account for the fact that your employees are not stealing time or resources from you because you watch them*.

I’ll address most of this below, but I want to focus on the last, italicized statement first because it gets into what is so wrong about contemporary corporate culture. One place where Tayloresque thinking embedded itself most deeply into the cultural fabric of companies is the notion that you are paying your employees for their time. This assumption is almost never questioned. It should be.

Until the middle of the Industrial Age, people were typically paid monthly or fortnightly if they were employees of a member of the nobility or gentry, produced and sold their goods if they were craftsmen or farmers, or were budgeted an account if they were senior members of the church. Oftentimes such payment partially took the form of room and board (or food) or similar services in exchange. Timekeeping seldom entered into it – you worked when there was work to be done and rested when the opportunity arose.

Industrialization brought with it more precise clocks and timekeeping, and you were paid for the time that you worked; because of the sheer number of workers involved, this also required better sets of accounting books and more regular disbursement of funds for payment. It was Taylor who quantized this down to the hour, however, with the natural assumption that you were being paid not per day of work but for ten hours of work a day. This was also when the term work ethic seemed to gain currency, the idea being that a good worker worked continuously, never complained, and never asked for too much, while bad workers were lazy and would steal both resources and time from employers if they could get away with it.

In reality, most work is not continuous in nature but can be broken down into individual asynchronous tasks within a queue. It only becomes continuous when the backlog never clears – when the queue is left unattended too long or tasks arrive faster than they can be completed. Office work, from the 1930s to the 1970s, usually involved a staff of workers (mainly female) who worked in pools to process applications, invoices, correspondence, or other content – when a pool worker was done, she would be assigned a new project to complete. This queue-and-pool arrangement basically kept everyone busy, further cementing the idea that an employer was actually paying for the employee’s time, especially since there was usually enough work to fill the available hours of the day.

That balance shifted in the 1970s and 80s as the impact of automation began to hit corporations hard. The secretarial pool had all but disappeared by 1990 with the advent of computers and networking. While productivity shot up – fewer people were doing much more “work” in the sense that automation enabled far more processing – people began to find themselves with less and less to do, which made it possible for companies to eliminate or consolidate existing jobs. A new generation picked up programming and related skills, and the number of companies exploded in the 1990s as entrepreneurs looked for new niches to automate and the barrier to entry for new companies dropped dramatically.


By focusing on demonstrable goals rather than “seat-time”, organizations can become more data-oriented.

The WFA Revolution Depends Upon Goals and Metrics

Since 2000, there have been three key events that have dramatically changed the landscape for work. The first was the rise of mobile computing, which made it possible for people to work anywhere there is a network signal. The second was the consolidation of cloud computing, which moved away from the requirement that resources be on premises. Finally, the pandemic stress-tested the idea of work virtualization in a way that nothing else could have, likely forcing the social adoption of remote work about a decade earlier than it would have happened otherwise.

Productivity through automation has now reached a stage where it is possible to

  1. get reliable metrics based upon work completed towards specific goals, regardless of time specifically spent,
  2. automate those tasks which do not in fact require more than minimal human intervention,
  3. get access to the resources needed to accomplish specific tasks, regardless of where those tasks are accomplished,
  4. provide a superior environment for meeting virtually across multiple time zones, creating both a video and a transcript artifact of such meetings,
  5. provide tools for collaborating in the same way, either synchronously or asynchronously (addressing the water cooler problem),
  6. ensure that information remains secure, and
  7. provide a set of eyeballs on evolving situations anywhere in the world at any time.

Put another way – remote workforce productivity is not the issue here.

Most people are far more productive than they have ever been, to the extent that it is becoming harder and harder to fill a forty-hour week most of the time. I’d argue that when an employer pays an employee, what they should be doing is spreading a year-long payment out into twenty-six chunks, paying not for the time spent but for the availability of the expertise. That the workweek is twenty hours one week and fifty hours the next is irrelevant – you are paying a salary, and the actual number of hours worked is far less important than whether the work is being done consistently and to a sufficiently high standard. This was true before the pandemic, and if anything it is more true today.

In the 1970s, businesses began pushing to change labor laws so that companies could classify part-time workers as hourly – this meant that, rather than having a minimum guaranteed total annual income, such workers were paid only for their time on premises. Such workers (who were also usually paid at or even below minimum wage) would typically be the ones to bear the brunt if a business had a slow week, but they were also typically responsible for their own healthcare and were ineligible for other benefits. In this way, even if on paper they were making $30,000 a year, in reality such workers’ actual income was likely half that, even before taxes. By 1980, labor laws had effectively institutionalized legalized poverty.

After the pandemic, companies discovered, much to their chagrin, that their rapid shedding of jobs in 2020 came back to bite them hard in 2021. Once people have a job, they develop a certain degree of inertia and often refuse to look for other work simply because switching jobs is always somewhat traumatic. This also tends to depress wage growth within companies, because most companies will only pay a person more (and even then only to a specific minimum) if they also take on more responsibility – in other words, new hires generally make more than existing workers in the same positions.

At the bottom of the pandemic bust, more than 25 million people were thrown out of work, a deeper plunge even than during the Great Depression. The rebound was fairly strong, however, which meant that suddenly every company that had jettisoned workers was trying to rehire all at once. For the first time in a generation, labor had newfound bargaining strength. This also coincided with the long-overdue generational retirement of the Boomers and the subsequent falloff in the number of GenXers, a generation roughly 35% smaller than the one before it. Demographic trends hint that the labor market is going to favor employees over employers for at least the next decade.

Given all that, it’s time to rethink productivity in the Work From Home era. The first part of this is to understand that work has become asynchronous, and ironically, it’s healthier that way. There will be periods when employees are idle and others when employees are very busy. Most small businesses implicitly understand this – restaurants (and indeed, most service economy jobs) have slack times and busy times. Perhaps it is time for “hourly” workers to go back to being paid salaries. That way, if someone is not needed on-site at a particular moment, sending them home doesn’t become an economic burden for them. On the flip side, that also puts the onus on the worker to remain reachable, in any number of ways, should things get busy again.

Once you move into the knowledge economy, the avenues for workers become more open. Salary holds here as well, but so does the notion of being available at certain times. I’ve actually seen an uptick in the number of startup companies that use Slack as a way of managing workflow, even in service sector work, as well as for indicating when people need to be in the office versus simply working on projects.

I am also seeing the emergence of a 3-2-2 week: three days specifically set aside for meetings, either onsite or over telepresence channels such as Zoom or Teams; two days where people may be on call but generally don’t have to meet and can focus on getting the most productivity without meetings interrupting their concentration; and two days that are considered “the weekend”. When workloads are light (such as during summers or winter holidays), this can translate into “light” vacations, where people put in a couple of hours of work a day during their “Fridays” and otherwise control their schedules. When workloads are heavy (crunch time), work can even bleed into the weekend, so long as that doesn’t go on for an extended period of time.

Asynchronous, goal-oriented, and demonstrable project planning also becomes more critical in the Work From Home era. This, ironically, means that “scrum”-oriented practices should be deprecated in favor of being able to attach work products (in progress or completed) to workflows – whether that’s updating a Git repository, publishing a blog, updating a reference standard, or designing media or programmatic components. Continuous integration is key here – use DevOps processes to ensure that code and resources reflect the current state of the project and to provide a tracking log of what has been done by each member of a given team, as in the sketch below.
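As a rough illustration of the kind of tracking log such a process can surface (a minimal sketch, not any particular DevOps product), the following Python snippet summarizes per-author activity straight from a Git repository’s history. It assumes only that git is installed and that the script is run inside a repository; the one-week window and the plain-text output are arbitrary choices.

```python
import subprocess
from collections import Counter

def recent_activity(days: int = 7) -> Counter:
    """Count commits per author over the last `days` days in the current repo."""
    log = subprocess.run(
        ["git", "log", f"--since={days} days ago", "--pretty=format:%an"],
        capture_output=True, text=True, check=True,
    )
    # One author name per line; ignore any blank lines.
    authors = [line for line in log.stdout.splitlines() if line.strip()]
    return Counter(authors)

if __name__ == "__main__":
    for author, commits in recent_activity().most_common():
        print(f"{author}: {commits} commit(s) in the last week")
```

A real dashboard would layer richer signals on top of this (pull requests, build results, published artifacts), but the principle is the same: the work product itself, not seat-time, is what gets tracked.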


Micromanagement, abusive behavior, and political games – is it any wonder people are staying away from the office?

Management: Solution Or Problem?

For production teams, this should be old hat, but it is incumbent on management to work in the same way, and ironically, this is where the greatest resistance is likely to come from. Traditional management has typically been more face-to-face in its interactions (in part because senior management has also traditionally been more sales-oriented). The more senior the position, the more likely that person will need comprehensive real-time reporting, and the more difficult (and important) it is to summarize the results from multiple divisions and departments.

Not surprisingly, this is perhaps the single biggest benefit of a data-focused organization with strong analytics: it makes it easier for managers to see, in the aggregate, what is happening within an organization. It also makes it easier to see who is being productive, who needs help, and who, frankly, needs to be left behind – a group that includes more than a few of those same managers.

https://www.theatlantic.com/ideas/archive/2021/07/work-from-home-be…

You cannot talk about productivity without also talking about non-productivity. This doesn’t come from people who are genuinely trying but are struggling due to a lack of resources, training, or experience. One thing that many of these same tools can do is highlight who those people are without putting them on the spot, and a good manager will then be able to either assign a mentor or make sure they get the training they need.

Rather, it’s those workers who have managed to find a niche within the organization where they don’t actually do much that’s productive, but they seem to constantly be busy. Work from home may seem to be ideal here, but if you assume that this also involves goal-oriented metrics, it actually becomes harder to “skate” when working remotely, as there is a requirement for having a demonstrable product at the end of the day.

Finally, one of the biggest productivity problems with WFH/WFA has to do with micromanagement as compensation for being unable to “watch” people at work. This involves (almost physically) tying people to their keyboards or phones, monitoring everything that is done or said, and then using lack of “compliance” as an excuse to penalize workers.

During the worst of the pandemic, stories emerged of companies doing precisely this. Not surprisingly, those companies found themselves struggling to find workers as the economy started to recover, especially since many of them had a history of underpaying their workers as well. Offices tend to create bubble effects – people are less likely to think about leaving when they are in a corporate cocoon than when they are working from home, and behavior that might be prevalent within offices – gaslighting, sexual harassment, bullying, overt racism, bosses not crediting their workers, and so forth – is more readily recognized as unacceptable when seen from outside the office than from within the bubble.

There are multiple issues involved with WFH/WFA that do come into play, some of them legitimate. However, the argument that productivity is the reason companies want workers to come back to the office is at best specious. While it is likely more work for managers, a hybrid solution, in which the office essentially becomes a place where workers congregate when they do need to gather (and those times certainly exist), is likely baked into the cake by now, especially as the Covid Delta variant continues to rage in the background. It’s time to move beyond Taylorism and the fallacy of the time clock.

Kurt Cagle is the Community Editor of Data Science Central, a TechTarget property.


Curiosity and Inquisitive Mindset: Keys to Data Science – and Life – Success

In July 2014, Malaysia Airlines Flight 17 (MH17), a passenger flight from Amsterdam to Kuala Lumpur, was shot down over eastern Ukraine. Over the next four years, Bellingcat – an independent international collective of researchers, investigators, and citizen journalists – combined open source and social media data with an inquisitive and curious mindset to uncover proof of Russian involvement in the MH17 tragedy.

Bellingcat made a break in the case when it discovered videos and photos posted online that identified and tracked a Russian Buk TELAR missile system as it made its way through rebel-controlled territory into Ukraine.  Bellingcat identified that the Russian military was involved in the MH17 tragedy years before it was confirmed by European officials (Figure 1).

Figure 1: How Bellingcat tracked a Russian missile system in Ukraine

Bellingcat identified the location of the convoy by comparing images posted online to satellite imagery. Matching multiple objects in the images, Bellingcat’s team determined the precise location of each image. The team used shadows in the images to determine the approximate time of day for each photo or video. The trail led them to Kursk, Russia, and established that the missile launcher that shot down MH17 came from a Russian brigade [1].

I had my own personal inquisitive epiphany when I was doing research for my blog “Creating Assets that Appreciate, Not Depreciate, in Value Thru Cont…”.  I suspected that Tesla must be building a massive simulation environment in which to train the Full Self Driving (FSD) module behind Tesla’s autonomous vehicle plans.  However, I struggled to find details until it occurred to me to analyze Tesla’s job board! There, I uncovered this job posting for a Tesla Autopilot Simulation, Tools Engineer (note: link is no longer active):

“The foundation on which we [Tesla] build these [autonomous vehicle] elements (such as building tools to perform virtual test drives, generate synthetic data set for neural network training) is our simulation environment. We develop photorealistic worlds for our virtual car to drive in, enabling our developers to iterate faster and rely less on real-world testing. We strive for perfect correlation to real-world vehicle behavior and work with Autopilot software engineers to improve both Autopilot and the simulator over time.”

Yes, Tesla needed to build this massive cloud simulation environment where the individual learnings from each of the 1M+ Tesla cars could be shared, reused, and continuously refined (Figure 2).

Figure 2: Tesla Simulator Role in Driving Autonomous Vehicle Vision

An inquisitive mind, with a curiosity to explore the unknown, comes naturally to humans.  I remember being young (once) and taking apart my dad’s radio to see how it worked (I later explained to him that the radio had extra parts when I put it back together).

Curiosity may be our most important human characteristic when compared to AI-powered machines that are continuously learning and adapting.  Humans can’t learn faster than machines armed with AI/ML models, nearly unlimited processing power, unbounded amounts of granular data, and a wide range of “learning” mechanisms such as machine learning, deep learning, reinforcement learning, transfer learning, federated learning, meta learning, and active learning (Figure 3).

Figure 3:  Different types of AI / ML Learning Algorithms

Unfortunately, society goes to great lengths to crush curiosity and an inquisitive mindset in favor of “standardization”.  In our youth, we sit through standardized classes with standardized curriculums in standardized classrooms with standardized testing. No one is allowed to color outside the lines.  But the curiosity crushing doesn’t stop there, because we then take jobs in organizations with standardized org charts, where employees sit like prairie dogs within standardized offices with their standardized job descriptions, standardized performance reviews, and standardized pay grades.

I fear that this “standardization” will lead to a “lowest common denominator” human development.  And to address the range and depth of problems that we face as a society, we MUST go beyond “standardization”.

The Harvard Business Review article “The Business Case for Curiosity” discusses the importance of nurturing curiosity and that inquisitive mindset.

“Most of the breakthrough discoveries and remarkable inventions throughout history, from flints for starting a fire to self-driving cars, have something in common: They are the result of curiosity. Curiosity is the impulse to seek new information and experiences and explore novel possibilities in search of new solutions to everyday problems and challenges.”

The article presents the five-dimensions of curiosity:

  • Deprivation sensitivity is recognizing a gap in knowledge the filling of which offers relief
  • Joyous exploration (my favorite dimension) is being consumed with wonder about the fascinating features of the world
  • Social curiosity is talking, listening, and observing others to learn what they are thinking and doing
  • Stress tolerance is a willingness to accept and harness the anxiety associated with novelty
  • Thrill seeking is being willing to take physical, social, and financial risks to acquire varied, complex, and intense experiences

The article is a good read if you are seeking ways to get more innovative thinking out of your teams (and yourself).

Why am I talking so much about curiosity and developing an inquisitive mindset? Because great data scientists don’t just think outside the box, they seek t….  Great data scientists are constantly seeking to explore, discover, test, and create or blend new approaches and create new variables that “might” be better predictors of performance (Figure 4).

Figure 4: Data Science Collaborative Engagement Process

“Might” may be the data scientist’s (and human’s) most powerful enabler.  “Might” grants us the license to explore, to follow our curiosity, to try different things, to fail and learn what doesn’t work, to try again, and eventually to come up with a better approach for solving wicked hard problems.

What can we do as leaders to encourage and nurture curiosity and that inquisitive mindset? Curiosity must be “allowed” to exist for curiosity to flourish.  And that’s where true leadership comes into play. Although many leaders say they value curiosity and an inquisitive mind, in fact many seek to stifle curiosity because curiosity thrives by challenging the status quo.  Many “leaders” treat curiosity as the enemy, as some sort of disease.

I believe Design Thinking is key to nurturing curiosity and an inquisitive mindset.  Design Thinking provides the mentality to accept that all ideas are worthy of consideration, that the best ideas won’t likely come from senior management, and that “diverge to converge” may be our most powerful ideation concept (Figure 5).

Figure 5: Blend Design Thinking with Data Science to Nurture Curiosity

Our society is facing wicked hard problems where “standardized” approaches just won’t work (and in fact, many of these “standardized” approaches have gotten us into this predicament).  Unleashing our natural curiosity and inquisitive minds is critical to addressing these problems.

In the preface of my new book “The Economics of Data, Analytics, and Digital Transformation”, I state the following:

“The COVID-19 pandemic has been exacerbated by incomplete and opaque data supporting suspect analytics, economic turbulence despite trillions of dollars spent in overly generalized financial interventions, and civil unrest from years of ineffective blanket policy decisions. The ability to uncover and leverage the nuances in data to make more effective and informed policy, operational, and economic decisions is more important than ever. However, improving decisions in a world of constant change will only happen if we create a culture of continuous exploring, learning, and adapting.”

The same old “standardized” business and operational processes won’t help us create the culture of continuous exploring, learning, and adapting needed to make informed policy, operational, and economic decisions about these wicked hard problems.  We must embrace curiosity and an inquisitive mindset to explore, try, test, fail, try again, and try again until we discover, blend, or create ideas that “might” lead to better business, operational, environmental, and societal outcomes.

[1] “How Bellingcat tracked a Russian missile system in Ukraine” https://www.cbsnews.com/news/how-bellingcat-tracked-a-russian-missi…


Geospatial Modeling: The Future of Pandemic Analysis

 

  • Geospatial modeling may be the future of pandemic control.
  • Recent studies analyzed local data and found hidden trends.
  • Border control isn’t enough to stop the spread of Covid-19.
  • Where you live determines your risk for the disease.

Significant amounts of data have been collected, analyzed, and reported globally since the start of the Covid-19 pandemic, leading to a better understanding of how the disease spreads. Much of this data has been analyzed with geospatial modeling, which finds patterns in data that include a geospatial (map) component. The modeling technique uses Geographic Information Systems (GIS), originally developed in the 1960s to store, collate, and analyze data about land usage [1]. Since its inception, GIS has been used in an ever-increasing range of applications, including modeling of human behavior in a geospatial context. More recently, the tool has been applied to Covid-19 data to analyze how the disease spreads globally (across national borders) and locally (within borders).

The bulk of Covid-19 geospatial modeling research has focused on global concerns like international travel, the effectiveness of border closures, and the spread of disease in a specific country taken as a whole. Recently, studies have been applied at the local level – in cities, neighborhoods, or specific rural areas. These local studies have revealed significant disparities in both Covid-19 testing and cases between different types of neighborhoods within cities. The results indicate that national border controls are not enough; the pandemic must also be tackled at a local level. Additionally, analysis has revealed that conclusions obtained from one country’s data cannot necessarily be applied to another country because of differences in social structures.

The Spread of Covid-19 Isn’t Random

One ecological research paper [3] explored spatial inequities in COVID-19 confirmed cases, positivity, mortality, and testing in three U.S. cities for the first six months of the pandemic. The research concluded that socially vulnerable neighborhoods – those suffering from residential segregation and with a history of systematic disinvestment – had more confirmed cases, higher test positivity and mortality rates, and lower testing rates compared to less vulnerable neighborhoods.

A similar Canadian-based study [4] revealed that “social injustice, infrastructure, and neighborhood cohesion” were characteristics associated with increasing incidence and spread of COVID-19. Maps of locales showed that hotspots were more likely to be found in disadvantaged neighborhoods.

The study concluded that cases are not randomly spread but spatially dependent. In other words, your odds of contracting and dying of the disease are higher if you live in a socio-economically disadvantaged area. The study authors urge that a tailor-made monitoring and prevention strategy – geared towards specific neighborhood issues – be applied to COVID-19 mitigation policies to guarantee control of the disease.

Covid-19 Data Can’t be Generalized

Up until fairly recently, much of the pandemic modeling data came from China. However, while Covid-19 data from one country (in this case, China) may offer important insights about the spread of disease, it’s not always the case that those results will be applicable to other countries. This is likely because social and urban structures in China may be quite different from those in Europe and other countries.

One study using data from Catalonia, Spain [2] showed differing results when comparing global spatial autocorrelation between Chinese and Catalonian data. Spatial autocorrelation describes the degree to which values at nearby locations resemble one another. The study found that the results from Catalonia showed no spatial autocorrelation with regard to Covid-19 statistics (with one minor exception), while studies using Chinese data showed strong spatial autocorrelation. In addition to differences in social structures, one reason for the disparity may be that the Chinese data was gathered from a huge geographical area and so may have suffered from scale effects.
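For readers unfamiliar with the statistic behind these comparisons, the sketch below computes global Moran’s I, the most widely used measure of spatial autocorrelation, from scratch with NumPy. The case counts and the neighbor matrix are made-up toy data; real studies build the spatial weights from actual administrative boundaries (for example with the PySAL libraries) rather than by hand.

```python
import numpy as np

def morans_i(values: np.ndarray, weights: np.ndarray) -> float:
    """Global Moran's I: near +1 = clustered, near 0 = random, near -1 = dispersed."""
    n = len(values)
    z = values - values.mean()      # deviations from the mean
    s0 = weights.sum()              # sum of all spatial weights
    num = n * (z @ weights @ z)     # cross-products of neighboring deviations
    den = s0 * (z @ z)              # total squared deviation, scaled by the weights
    return num / den

# Toy example: four areas in a row, each a neighbor of the adjacent ones.
cases = np.array([120.0, 110.0, 30.0, 25.0])       # hypothetical case counts
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)           # binary contiguity weights

print(f"Moran's I = {morans_i(cases, w):.3f}")      # positive => spatial clustering
```

In this toy example the high-case areas sit next to each other, so the statistic comes out clearly positive; a spatially random pattern, as reported for Catalonia, would yield a value near zero.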

The Catalonia study concluded that positive cases may follow a spatially random pattern. However, the authors noted a few anomalies that indicated the possibility of hidden local spatial autocorrelation in specific areas.

All three studies concluded that the pattern of Covid-19 spread warrants measures to contain the virus at a local level (city or town) as well as a global level. In other words, border controls are not enough to contain the virus unless resources target regional hotspots as well.

References

[1] Overview of GIS History

[2] A first insight about spatial dimension of COVID-19: analysis at mu…

[3] Spatial Inequities in COVID-19 Testing, Positivity, Confirmed Cases…

[4] COVID-19 in Toronto: A Spatial Exploratory Analysis


Taking a Cloud-Native Approach to Software Development & Microservices


The time of hardware and on-premises infrastructure has passed. With the emergence of cloud computing, most businesses, small or big, have already adopted or are transitioning to cloud native architecture to keep innovating in a fast and efficient manner. This approach leverages the benefits of the cloud by using an open-source software stack to develop and deploy easily scalable and resilient applications.

Through this blog, we will look at the cloud native approach and why it matters in the world of software development.

Cloud Native Defined

Cloud native is an approach to developing, running, and optimizing applications that takes full advantage of the cloud computing delivery model. This method allows developers to fully use cloud resources and to integrate new technologies like Kubernetes, DevOps, and microservices for rapid development and deployment.

In simple terms, the cloud native approach is all about creating applications without worrying about the servers and underlying infrastructure. This flexibility is one of the major advantages of the cloud native approach over a monolithic architecture.

In fact, IDC research states that 90% of new enterprises will adopt a cloud native approach by 2022.

It is clear that the cloud native approach will take over from legacy systems in the near future. On-premises physical servers that don’t integrate with new systems and that hinder innovation will be replaced by distributed servers.

Related: Which one to choose: Cloud Native vs Traditional Development

Cloud Native Applications

Cloud native applications are created as a composition of small, independent, and loosely coupled microservices. They are built to deliver significant business value – to scale rapidly and to incorporate feedback for continuous improvement. These microservices are packaged in Docker containers, the containers are orchestrated with Kubernetes, and the whole is managed and deployed using DevOps workflows.

Docker containerization is a platform-as-a-service offering that packs all the software you need to run into one executable package known as a container. These containers are OS independent and run in a virtualized environment. Kubernetes, on the other hand, is an open-source container orchestration service responsible for the management and scaling of containers. DevOps workflows enable software developers to release and update apps faster using agile processes and new automation tools.

“The Cloud Native Computing Foundation (CNCF) found that container popularity has picked up the pace, increasing to 92% use in production environments. Currently, 91% of enterprises are leveraging Kubernetes, 83% of them in production.”

Source: CNCF Survey Report

This simply means that organizations are increasingly adopting containerization, or moving new workloads into containers, as they become more comfortable with the technology.

Related: Legacy Application Modernization Doesn’t Mean Starting Over

Why Cloud Native Approach Matters in Software Development

Cloud native computing saves cloud resources and helps developers build the right architecture while using best-of-breed technologies and tools. The architecture utilizes cloud services, including AWS EC2, S3, and Lambda, to support dynamic and agile development. Using this approach, a software application is divided into small microservices that are centered around APIs for establishing connections. Buoyed by automation capabilities, the architecture is isolated from server and OS dependencies and managed through agile DevOps processes.

Here are some key reasons why cloud native is a modern approach to software development.        

Flexibility and Scalability

The loosely coupled structure of microservices enables software developers to create applications on the cloud by choosing the best tools for the job, depending on the specific business requirements. Put simply, software architects can choose the data storage and programming languages that make the most sense for each specific microservice.

Developers are no longer restricted to working with one technology. As technologies advance, software professionals can take advantage of them without worrying about how changes in one microservice affect the application as a whole. They can easily scale microservices independently, remove old ones, and add new ones while keeping the rest of the application code intact. They can also take a malfunctioning microservice temporarily offline while launching the latest version of the application.

 

API based Communication

Cloud-native microservices rely on representational state transfer (REST) APIs for interaction and collaboration among services. APIs are a set of tools and protocols responsible for communication between applications and services. Such protocol-level designs avoid the risks associated with direct linking, shared memory models, or direct database retrieval between applications. For example, binary protocols are a perfect fit for communication between internal services. Other examples of modern open-source protocols include gRPC, Protobuf, and Thrift.
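To make the REST-style interaction described above concrete, here is a minimal sketch of a toy “inventory” microservice exposing a single JSON endpoint. It assumes Flask is installed; the service name, route, and data are invented for illustration, and a production cloud-native service would add containerization, configuration, and authentication on top.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory data; a real microservice would own its own datastore.
INVENTORY = {"sku-123": {"name": "widget", "quantity": 42}}

@app.route("/api/v1/inventory/<sku>", methods=["GET"])
def get_item(sku: str):
    """Return one inventory record as JSON, or a 404 if the SKU is unknown."""
    item = INVENTORY.get(sku)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"sku": sku, **item})

if __name__ == "__main__":
    # Other services call this endpoint over HTTP instead of sharing a database.
    app.run(host="0.0.0.0", port=8080)
```

Another service would consume it with an ordinary HTTP call (for example, a GET request to /api/v1/inventory/sku-123) rather than reaching into this service’s database, which is exactly the decoupling the API-based communication model is meant to enforce.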

This aligns with what Icreon delivered for ASTM, the world’s largest standards development organization, by integrating APIs to create a well-designed information system that connects all ASTM datapoints together in real time and ensures the right data gets to the right place. The platform is built atop Azure Cloud, which allows the organization to scale up the system whenever the need arises.

DevOps and Agile Frameworks

Microservices are a natural fit for agile frameworks that work on DevOps and CI (continuous integration) and CD (continuous delivery) models. Unlike the monolithic approach, DevOps workflows create an environment in which software development is an ongoing cycle. In cloud native development, multiple cross-functional teams follow these core principles of DevOps to simultaneously work on different features or modules of an application. The CI/CD approach also allows developers to embrace feedback or changing requirements (ongoing automation) throughout the development lifecycle, from testing to delivery and deployment.

Related: Making the Case for Agile Software Development in 2021

Final Thoughts

Cloud native architecture takes software development to the next level, equipping developers with new tools and technologies to deliver higher-quality products and solutions faster. In addition, development using the appropriate methodologies and tools yields significant savings in cloud computing resources.

The transition from a monolithic approach to cloud native development enables organizations to stay ahead of the competition. Containers make it easier to distribute an application among team members, run it in different environments, and deploy the same container in production. Microservices have introduced a new way to structure your system, improve encapsulation, and create maintainable small units or services that can quickly adapt to new requirements.

About the Author  

Paul Miser is the Chief Strategy Officer of Icreon, a digital solutions agency and Acceleration Studio. He is also the author of Digital Transformation: The Infinite Loop – Building Experience Brands for the Journey Economy. 


How Facebook and Google are pushing Mobile UX to its limits

In an endeavor to ensure shorter loading times for news and media web pages across the mobile web, Facebook and Google came up with Instant Articles and Accelerated Mobile Pages (AMP).

For Facebook, the initiative was focused on keeping users from leaving the social-media channel rather than referring traffic to online publishers. For Google, the project focused on building lightweight web pages by using an open-source AMP HTML code framework. In both cases, the focus was on radically improving the mobile user experience.

How did ‘Facebook’s Instant Articles’ Improve its Mobile UX?

With Instant Articles, publishers can host their stories and posts on Facebook’s servers, which allows linked articles to load ten times faster than a separate web app or page. With various interactive tools like auto-play videos, maps, zooming, comments, audio captions, analytics tools, and others, the project provides handy tools for publishers and a great mobile user experience.

With speed being the main selling point, Instant Articles also ensured visual consistency and readability through good visual design standards and less visual clutter.

Publications that already signed up for Instant Articles were National Geographic, The New York Times, BBC News, Fox Sports, The Washington Post, The Onion, The Huffington Post, The Verge, The Atlantic, Business Insider, TIME, Hollywood Reporter, and others.

How will ‘Google’s Accelerated Mobile Pages’ Enhance Mobile UX?

As per Google, AMP HTML has ‘drastically improved’ the performance of the mobile web. This was made possible by allowing website owners to build lighter-weight web pages and by employing caching techniques that pre-fetch and store web pages so they are pre-loaded even before the user clicks on them. The result: web pages that earlier took around 3 seconds to load now show in milliseconds.

Accelerated Mobile Pages could load much faster when users search for news on Google by getting rid of JavaScript and simplifying the HTML code architecturally.

Many big tech companies and online portals are already on board, including Pinterest, LinkedIn, Twitter, The Washington Post, The Guardian, The New York Times, and The Wall Street Journal.

So, is the Mobile UX Pushed to its Limits?

Thanks to internet conditioning, we want everything quickly. When visiting a news or media website, pages can take time to load. While the text might show quickly, the rest of the page may take longer to build up because of ads and images. This painful experience of slow page load times is now being addressed by Facebook’s Instant Articles and Google’s Accelerated Mobile Pages. These aim at driving more direct traffic for publishers by improving the user experience and building a foundation for creators to deliver their content.

With the speed of the mobile web, it seems a win-win scenario for all parties – first and foremost, the user; the publisher; and the platforms supporting the content. In the long run, the strategy of picking content from a few media houses and serving these quickly to the users can lead to content agenda-setting. Only time will tell how this media distribution and consumption model on the mobile web evolves.

What makes a great UX Design?

UX design is a dynamic and complex process – to offer the most exclusive experience to your customers, attend to what they have to say. Focus your priority on delivering user-centric designs.

What would interest your user base? Nobody other than your customers would be able to answer that better. A user-centric design will help designers cater to the needs of their users through the app.

To start with, research and reference similar mobile apps; this doesn’t mean copying other mobile apps, as what works best for one interface doesn’t necessarily work for others.

Instead, learn from and analyze your competition – why specific trends work and others don’t. Combine your research with what aligns best with your brand and personalize the user experience, making your UX stronger in the long run. The most common way of validating your product is by testing it with your target audience. Generate a minimum viable product (MVP) at the beginning to determine whether your idea is well accepted by its core users. If you are wondering how much it will cost to produce a mobile app, online calculators can give you a rough estimate.

How can you improve UX for your mobile app?

Every app and its purpose are different; thus, the improvements you want to provide your customers can vary. However, the basic essentials of delivering a seamless, fast, and personalized experience remain the same.

Personalized UX 

Personalization provides a unique UX for every app or website. When you align the user experience with user preferences, users are more likely to stay connected with your app. Personalization becomes even more critical when designing eCommerce UX pages. Displaying pop-up messages with personalized names reminds customers of half-abandoned transactions – this adds a customized touch. However, it’s also important to display only relevant content to avoid any counter-effect.

Proper features and speed 

Google research found that as page load time increases from one second to five seconds, the probability of a bounce increases by 90%! Optimizing images and reducing plugins are some of the ways to speed up mobile page loading, so they should remain a focus to avoid any speed issues. The app’s functionality must help users finish their tasks, as that is the primary motivation for downloading any app. Prioritize the core features vital for achieving those tasks and offer only the relevant features that encourage even more users to stay connected with your app.

Gesturization Tune 

Gesturization covers the actions users take while navigating and interacting with your app, like tapping, scrolling, and swiping through the screen. Knowing your users’ behavior is crucial for optimizing gestures accordingly. Gestures allow users to engage with the technology through the sense of touch; the most common gestures are tap, double-tap, drag, swipe, and press. Designers should keep touch targets out of hard-to-reach areas and provide enough tapping space for easy navigation.

Summing Up

The big fish are actively experimenting with their UX designs to hold onto their customer base and provide a seamless experience. It’s time designers and leaders focus on what they can learn and experiment with on their own for their consumers. The above tips highlight recommendations for visually pleasing design and reliability. UX design itself should be subtle, simple, and decluttered – the users must feel the pronounced navigation flow.


What skills would be needed for Telecoms professionals in the 5G world?

Background

We all have a mobile phone and, in this sense, we are all consumers of mobile technology. Over the past four decades, mobile networks have evolved in parallel with the internet. However, the next evolution of the mobile network (5G) has much to offer to the enterprise. Yes, consumers will benefit because 5G is more than 100 times faster than the prevailing network technology (4G/LTE). However, 5G also represents a fundamental change in the architecture of the mobile network.

Over the last decade, telecoms has been evolving ‘as a service’. For example, the early efforts of the GSMA to create One API based on IMS (IP Multimedia Subsystem) allowed access to payment, location, SMS, etc. However, these efforts were niche and experimental.

Now with 5G technologies, the capabilities of mobile networks and the internet are converging to a much greater extent. This technical convergence has the potential to create massive opportunities for both enterprises and telecoms. COVID will only accelerate the need to deploy innovative solutions via 5G.

Implications for new services

What does this mean for new services?

New services would be network aware. Currently, a service does not depend on the capabilities of the network, but 5G has some unique capabilities that could be leveraged by applications.

Specifically, it would be possible to create low-latency, high-bandwidth applications running on edge devices with high sensor density. You could also deploy live video analytics, which depends on low latency.

Typically, you would deploy new 5G-based services as a partnership between an enterprise, a network operator, and a systems integrator. Hence, depending on the partnership between the telecom operator and the cloud provider, new services can be developed using existing cloud APIs (e.g., AWS, Azure, GCP). Mobile Edge Computing would be a key touchpoint for the deployment of new services, especially Edge/IoT services.

Some applications, like self-driving cars, are still some way off. But many enterprise applications could be re-imagined through a combination of 5G, MEC, AI, IoT, robotics, AR, autonomous logistics systems, etc.

Implications for skills

But the biggest impact will be on skills in telecoms. The integration with the enterprise world creates the need for a different type of application. Traditional telecoms services are B2C and depend on revenue measures like ARPU. With complex applications spanning telecoms and the cloud, a new skill set is required, which could include:

  • AI
  • Edge / IoT
  • Cloud
  • Cyber Security
  • Media
  • Augmented reality and virtual reality
  • Creating autonomous systems
  • End to End services
  • Digital transformation
  • Robotics
  • Project management / Product management – execution

Most importantly, we need the imagination to rethink and retransform services with new capabilities enabled by 5G.

Image source GSMA


When to Adjust Alpha During Multiple Testing

In this new paper (Rubin, 2021), I consider when researchers should adjust their alpha level (significance threshold) during multiple testing and multiple comparisons. I consider three types of multiple testing (disjunction, conjunction, and individual), and I argue that an alpha adjustment is only required for one of these three types.

I argue that an alpha adjustment is not necessary when researchers undertake a single test of an individual null hypothesis, even when many such tests are conducted within the same study.

For example, in the jelly beans study below, it’s perfectly acceptable to claim that there’s “a link between green jelly beans and acne” using an unadjusted alpha level of .05 given that this claim is based on a single test of the hypothesis that green jelly beans cause acne rather than multiple tests of this hypothesis.

For a list of quotes from others that are consistent with my position on individual testing, please see Appendix B here.

To be clear, I’m not saying that an alpha adjustment is never necessary. It is necessary when at least one significant result would be sufficient to support a joint hypothesis that’s composed of several constituent hypotheses that each undergo testing (i.e., disjunction testing). For example, an alpha adjustment would be necessary to conclude that “jelly beans of one or more colours cause acne” because, in this case, a single significant result for at least one of the 20 colours of jelly beans would be sufficient to support this claim, and so a familywise error rate is relevant.
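To make the disjunction case concrete, here is a minimal sketch in Python (my own illustration, not taken from the paper) of how the familywise error rate inflates across 20 unadjusted tests of independent true null hypotheses, and how a Bonferroni adjustment restores control:

```python
# Familywise error rate (FWER) for m independent tests of true nulls.
alpha = 0.05
m = 20  # e.g., 20 colours of jelly beans

# With no adjustment, the chance of at least one false positive balloons.
fwer_unadjusted = 1 - (1 - alpha) ** m          # ~0.64

# Bonferroni: test each colour at alpha / m.
alpha_bonferroni = alpha / m                     # 0.0025
fwer_bonferroni = 1 - (1 - alpha_bonferroni) ** m  # ~0.049

print(f"Unadjusted FWER across {m} tests: {fwer_unadjusted:.3f}")
print(f"Bonferroni per-test alpha: {alpha_bonferroni:.4f} (FWER ~{fwer_bonferroni:.3f})")
```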

I also argue against the automatic (mindless) use of what I call studywise error rates — the familywise error rate that is associated with all of the hypotheses that are tested in a study. I argue that researchers should only be interested in studywise error rates if they are interested in testing the associated joint studywise hypotheses, and researchers are not usually interested in testing studywise hypotheses because they rarely have any theoretical relevance. As I explain in my paper, “in many cases, the joint studywise hypothesis has no relevance to researchers’ specific research questions, because its constituent hypotheses refer to comparisons and variables that have no theoretical or practical basis for joint consideration.”

Sometimes it doesn’t make sense to combine different hypotheses as part of the same family!

For example, imagine that a researcher conducts a study in which they test gender, age, and nationality differences in alcohol use. Do they need to adjust their alpha level to account for their multiple testing? I argue “no” unless they want to test a studywise hypothesis that, for example: “Either (a) men drink more than women, (b) young people drink more than older people, or (c) the English drink more than Italians.” If the researcher does not want to test this potentially atheoretical joint hypothesis, then they should not be interested in controlling the associated familywise error rate, and instead they should consider each individual hypothesis separately. As I explain in my paper, “researchers should not be concerned about erroneous answers to questions that they are not asking.”
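As an illustration of the individual-testing argument (again my own sketch, not material from the paper), a quick simulation under the null shows that the Type I error rate for each individual hypothesis stays at roughly .05 no matter how many unrelated hypotheses happen to be tested in the same study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, alpha = 5_000, 50, 0.05

# Three unrelated hypotheses tested in the same "study" (gender, age,
# nationality differences), all simulated with no true effect.
false_positives = np.zeros(3)
for _ in range(n_sims):
    for h in range(3):
        group_a = rng.normal(size=n)
        group_b = rng.normal(size=n)
        _, p = stats.ttest_ind(group_a, group_b)
        false_positives[h] += p < alpha

# Each individual hypothesis keeps a ~5% Type I error rate, regardless
# of how many other, unrelated hypotheses the study also tests.
print(false_positives / n_sims)  # roughly [0.05, 0.05, 0.05]
```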

Source Prolead brokers usa

the evolving role of a cdo in a data driven market
The Evolving Role of a CDO in a Data-Driven Market

As a result of the recent pandemic, there has been a surge in demand for data to help safeguard businesses from future uncertainties. With an increasing number of organizations fuelling solutions and driving innovation with data, there is growing concern about the way data is accessed, used, and protected. In fact, total fines issued under the GDPR increased by 126% from 2019 to 2020, and these fines will only continue to grow as privacy regulations evolve and expand.

However, data governance isn’t just about avoiding fines. It also enables organizations to achieve better data analytics, more informed decisions, and improved operational efficiency. As more organizations begin to realize the value of a sound data governance strategy, the role of Chief Data Officer (CDO) has grown in importance and demand.

We recently had the opportunity to chat with Rupinder Dhillon, CDO at Hudson’s Bay Company. Rupinder has worked in data for over twenty years with expertise in data management, business intelligence, advanced analytics, and AI and machine learning. She has worked across industries spanning financial services, software, telecommunications, and now retail.

We had the opportunity to get her perspective on the state of data in 2021.

Q: Welcome, Rupinder. Thank you for sitting down with us. To get started, can you please explain your main responsibilities as a CDO?

I believe that the CDO role is nuanced to the company that you’re in. The role will differ slightly from organization to organization depending on their data maturity, where they are in their data journey, and what data goals they have set. Traditionally, data is thought of as the exhaust that comes from a system. The role of a CDO includes trying to change this mindset from data as just a by-product to a tool that can be used for exploration and innovation. Data is no longer just about reporting.

My role sits at the crossroads of three key areas:

  1. providing good governance around data;
  2. creating a data-driven culture and driving innovation through the use of data; and
  3. making sure the organization has good visibility into performance and key metrics.

Q: How would you define a data-driven culture?

A data-driven organization is one that has data and analytics embedded in the culture and in the way people work every day, regardless of their function. In a data-driven culture, data and analytics are not seen as a function owned by one team but by the entire company.

Q: How have you implemented and adopted a data-driven culture?

I’ve adopted the idea of a data-driven culture by positioning data and technology as an ecosystem. Investing in technology that allows data to be more accessible to people across the organization has been a driver for adoption. I call this an ecosystem because the platform is a place in which teams not only pull data out, but also feed data into the platform.

This is important so that all the great analytics that teams are discovering can be made available across the organization. Everyone in a company should have the responsibility not only to pull data out but also to feed data into the tech ecosystem. For example, if the Logistics team is working on an analysis in their space, the same data and analytics should also be available and shared with the Marketing team.

Q: What are the most pressing challenges facing companies who are implementing data governance strategies?

The challenges that companies face depend greatly on the industry they are operating in. For example, data governance for an insurance or financial company is going to be different than that of the retail world. It must be performed with the same level of scrutiny, but the focus of the strategy will differ. Canadian companies need to be thinking about all of the changes that will be coming in the near future regarding data privacy.

Regulatory changes present the opportunity for organizations to think through the questions of:

  • What do we want the data relationship to be with our customers?
  • How can we design our data strategy from the get-go to not just protect customer data, but also build trust with our consumers?
  • What is the legislation and what does it mean to us?

Q: What do you think are the most important variables to consider when developing a data governance strategy?

Across the board, if you’re a company that is working business to consumer (B2C), there needs to be a focus on how to protect customer data and their privacy. Especially when working with ML and AI to power processes, you need to decide where customer data fits and where it should be anonymized.

The most important lesson I’ve learned about data governance thus far is understanding what governance you need, and when you need it. Think about these questions when developing your data governance strategy:

  • What are the non-negotiables with data governance that you need to build into your strategy on day one?
  • What are the things you’d like to build into your data governance strategy over time?

Building out a data governance program is a process. It’s important to start in small incremental steps, not attempt to boil the ocean. Many data initiatives try to get the attention of corporate leaders; however, they must show business value and impact first. Robust data programs could take 12-24 months or more to realize business value. Most organizations simply cannot wait for this amount of time which results in the program losing traction and fizzling out.

Q: You mentioned the importance of data sharing within an organization. What are the opportunities that you see in implementing data sharing?

There are two parts to data sharing I’d like to address: internal and external data sharing.

I don’t believe that certain teams own certain data: for example, that marketing owns customer data, logistics owns supply chain data, and customer service owns call centre data. By ownership, I mean the idea that those are the only teams who need that information and can use it. Our customer data is equally as important to our merchandise planning folks as it is to those running our digital strategy, just as our call centre information is very important to our marketing team.

Organizations still require domain responsibilities, meaning teams carry the responsibility for the data that gets generated through their domain. However, I really buy into data mesh architecture where different parts of the organization are creating data as a product for the rest of the organization to consume and collaborate with.

“Real insights come from the cross-sections of traditional data domains. You miss out on hidden problems and opportunities if you neglect insights that can be found by analyzing trends across departments.” 

When considering the value of external data sharing, I do see opportunities in sharing anonymized information across different companies and different types of organizations (e.g. COVID-19 or demographics data). The COVID-19 pandemic has highlighted the importance of data sharing to better adapt to external environment changes. I have seen some great technologies on the market that make creating an ecosystem of internal and external data sharing more seamless, and most importantly, without needing to share any customer information.  There is a tendency to focus on customer data when considering external data sharing strategies, but there is a lot of value to be gained by exploring the cross-sections of operational or functional types of data to better understand a company’s operations and find opportunities.

Q: What kinds of trends do you expect in the data industry and with data use in the enterprise?

I think there is still a long way to go in Canada around the adoption of data and analytics. We’ve made strides forward, but data is not yet a generally available and usable tool in most organizations’ toolboxes. It is very important not just to democratize data, but to democratize analytics capabilities by making them part of everyday functions rather than the remit of a single team.

With more companies embracing ML and AI, I see a trend towards these technologies being a general purpose tool available across all facets of an organization. I hope to see more Canadian companies embed data and technology wherever possible. This includes driving AI and ML solutions that are actually part of their systems and processes. Not just one-off use cases, but ones that drive and change how an organization functions in their day-to-day.

I’ve seen a lot of digital acceleration and I think we’ll see even more over the next two to five years. The companies that are born in the digital age, like Shopify, are setting the bar for all industries. The organizations who were not born in the digital age, but are prioritizing digital transformation efforts, will be well-served to keep up with competitors going forward.


We’d like to thank Rupinder for taking the time to speak with us and share her insights on the state of data in 2021.

To echo her insight, data-driven organizations are those that have data and analytics embedded in their business processes, departments, and people. The organizations who reap the benefits of both internal and external data-sharing will be positioned to thrive in the upcoming years.

Want to learn more about aligning your business and data strategy? Request a consultation with one of our data experts or browse the largest catalog of solution-ready data to determine how ThinkData’s tech can advance your projects.

Source Prolead brokers usa

ai in seo 5 ways to use ai to boost website rankings
AI in SEO: 5 Ways to Use AI to Boost Website Rankings

Artificial intelligence (AI) has taken the business world by storm with its innovative applications in almost every field. In fact, by some estimates, around 50% of businesses use AI in some form or another.

SEO is no exception to this and the use of AI in SEO is growing multifold. From content optimization to link building, AI is being used for SEO in multiple ways.

But, why is it important to use AI in SEO and how does that improve website performance?

Find out the answer in the following sections.

Why is It Important to Use AI for SEO?

Artificial intelligence plays an important role in SEO, from keyword research to content optimization. Even the search engines use AI in their ranking algorithms; Google’s RankBrain and BERT are good examples.

Using AI in SEO will make the ranking process more efficient.

AI and SEO together form a powerful combination that can help improve website rankings and overall search performance.

Most tasks that you need to do for SEO, like keyword research and optimized content creation, can be done better using AI.

Want to learn how AI can help with SEO?

Find that out in the next section.

5 Ways to Use AI for Search Engine Optimization

Now that you know that AI is important for SEO, let’s understand how exactly it helps.

In this section, you will learn how to use AI for SEO to improve your website’s search rankings and performance. Use these as a guide for using AI for SEO.

Ready to get started?

Here you go.

1. Find the Right Keywords

One of the most common uses of AI in SEO is to find the right keywords to target through your website content.

There are many AI-based keyword research tools that you can use to find relevant keywords in your niche. These tools can also help you select the right ones using metrics like search volume, keyword difficulty, etc.

You can also explore your competitors’ keywords and find keyword opportunities that they might not be utilizing. 

Doing all of these manually would have been an almost impossible and time-consuming feat, but the use of AI makes it extremely easy.
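As a rough illustration of the kind of filtering these tools automate, here is a minimal sketch with made-up keyword metrics (the thresholds and numbers are invented, not from any particular tool):

```python
# Hypothetical keyword metrics of the kind an AI keyword tool might return.
keywords = [
    {"keyword": "ai seo tools", "volume": 5400, "difficulty": 62},
    {"keyword": "seo audit checklist", "volume": 2900, "difficulty": 38},
    {"keyword": "voice search optimization", "volume": 1600, "difficulty": 41},
    {"keyword": "best crm software", "volume": 33000, "difficulty": 85},
]

# Simple rule of thumb: enough demand, not too competitive.
MIN_VOLUME, MAX_DIFFICULTY = 1000, 50

shortlist = [
    kw for kw in keywords
    if kw["volume"] >= MIN_VOLUME and kw["difficulty"] <= MAX_DIFFICULTY
]

for kw in sorted(shortlist, key=lambda k: k["volume"], reverse=True):
    print(f'{kw["keyword"]}: volume={kw["volume"]}, difficulty={kw["difficulty"]}')
```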

Also, don’t forget to target the right keywords on your landing pages. Using the right platform to build landing pages isn’t enough; you also need to optimize each page’s content for the keywords you want it to rank for.

For example, a call center that provides different types of customer services can create a variety of landing pages by targeting multiple keywords. By leveraging AI-powered tools, they can find the right keywords to target for each of these pages too. This way they would be able to rank for each keyword that they target. 

2. Create and Optimize Content

AI is not only good for finding keywords but also for creating content that is optimized for SEO. 

AI-powered content tools can surface trending topics and content gaps, and you can use these insights to create better content and outrank your competitors. Furthermore, visual content has the power to attract an audience: different types of infographics, graphs and charts, screenshots, and photos can rapidly improve user engagement.

You can further optimize your content by using AI-powered online editing and keyword optimization tools like Grammarly and Surfer.

3. Discover Link-Building Opportunities

Another important aspect where AI helps with SEO and website optimization is finding link-building opportunities. 

There are numerous AI-powered SEO tools that you can use for this purpose.

Semrush, for example, offers a Backlink Audit tool that surfaces link-building opportunities. It will show you a list of sites where you have published content but have not yet asked for a backlink.

You can use this list for your link-building outreach campaigns and get quick backlinks without much effort. This is one of the easiest tactics to build links and a brilliant example of how using AI in SEO makes life easier.

4. Optimize Your Site for Voice Searches

One of the lesser-known uses of AI in SEO is to optimize your website content for voice searches.

How does AI help with that?

AI-based tools like AnswerThePublic can tell you what questions people ask around a keyword. So, instead of targeting standard keywords, you can target question-style long-tail keywords, which is what people use for voice searches.
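For a rough sense of what that looks like in practice, here is a minimal sketch (the keyword list is invented) that pulls out question-style, conversational phrases of the kind voice searches tend to use:

```python
# Keep long, question-style phrases -- the kind people speak rather than type.
QUESTION_WORDS = ("who", "what", "when", "where", "why", "how", "can", "does", "is")

keywords = [
    "ai seo tools",
    "how does ai improve seo",
    "what is voice search optimization",
    "backlink audit",
    "can ai write seo content",
]

voice_keywords = [
    kw for kw in keywords
    if kw.split()[0] in QUESTION_WORDS and len(kw.split()) >= 4
]

print(voice_keywords)
# ['how does ai improve seo', 'what is voice search optimization', 'can ai write seo content']
```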


5. Conduct SEO Audits

One of the most important, yet complex, tasks for SEO is to conduct regular site audits.

Site audits are important to find and fix issues that may affect your website’s search performance. Such issues could be anything from broken links to duplicate content and should be fixed promptly.

This is where AI can help you.

AI-based SEO tools like Semrush and its alternatives can conduct extensive site audits within minutes, if not seconds. These tools provide a ready-to-use list of SEO issues that need to be addressed and sort them by priority.
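As a tiny illustration of what one slice of an audit looks like under the hood, here is a minimal sketch (the URLs are hypothetical, and it assumes the requests and beautifulsoup4 packages are installed) that flags broken links and missing page titles:

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical pages to audit; a real tool would crawl the sitemap instead.
urls = [
    "https://example.com/",
    "https://example.com/blog/ai-in-seo",
    "https://example.com/old-page",
]

issues = []
for url in urls:
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException as exc:
        issues.append((url, f"request failed: {exc}"))
        continue
    if resp.status_code >= 400:
        issues.append((url, f"broken link (HTTP {resp.status_code})"))
        continue
    title = BeautifulSoup(resp.text, "html.parser").title
    if title is None or not title.get_text(strip=True):
        issues.append((url, "missing <title> tag"))

for url, problem in issues:
    print(f"{url}: {problem}")
```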


Conclusion

AI is the future of SEO and if you want a competitive edge, you need to embrace it early on. 

The uses of AI in SEO are numerous and can make the entire process quicker and more efficient for you. Start using AI-powered SEO tools to leverage this technology and improve your website’s rankings and overall performance.

Ready to embrace AI to grow your business?

All the best!

Source Prolead brokers usa

how does machine learning and retail demand forecasting promote business growth
How Do Machine Learning and Retail Demand Forecasting Promote Business Growth

Machine learning has transformed retail demand forecasting. The primary aim of using machine learning in demand forecasting is to predict future demand for products and services accurately, so that products can be designed and re-designed accordingly. It is one of the retail IT solutions that reduces product wastage while mitigating inventory issues. Furthermore, it speeds up data processing, helps eliminate stock-out situations, provides more accurate forecasts, and accelerates the analysis of unstructured data.

What are the principles of machine learning demand forecasting?

Machine-learning demand forecasting rests on a simple principle: the software first learns the business’s demand patterns from historical data, then uses its self-learning capabilities to predict future demand. The forecasts generated by the software benefit both users and the company.
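As a highly simplified sketch of that principle (using synthetic weekly sales data and a basic scikit-learn model, not any particular vendor’s software), the pattern is: learn from historical demand, then predict the next period:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Synthetic weekly demand: trend + rough yearly seasonality + noise.
weeks = np.arange(104)
demand = (200 + 1.5 * weeks
          + 30 * np.sin(2 * np.pi * weeks / 52)
          + rng.normal(0, 10, size=weeks.size))

# Simple lag features: use the previous 4 weeks to predict the next week.
LAGS = 4
X = np.column_stack([demand[i:len(demand) - LAGS + i] for i in range(LAGS)])
y = demand[LAGS:]

model = LinearRegression().fit(X[:-1], y[:-1])  # hold out the final week

forecast = model.predict(X[-1:])[0]
print(f"Forecast for the held-out week: {forecast:.0f} units (actual: {y[-1]:.0f})")
```

Real systems use richer features (promotions, price, weather, holidays) and more capable models, but the learn-then-predict loop is the same.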

 

Now let’s look at how retail demand forecasting uses machine learning to promote customer satisfaction and business growth.

Automation in demand forecasting

Implementing machine learning in demand forecasting automates the entire process. Forecasting demand is a time-consuming and tedious process that involves several people, and the traditional approach carries a high risk of human error. Even small mistakes can result in significant losses of both brand value and money for retail business owners.

The combination of demand forecasting and machine learning allows retailers to save time and resources. Plus, machine learning in retail can deliver accurate results while greatly reducing the need for manual effort from specialists.

Optimal accuracy in planning inventory

Accuracy is one of the crucial factors in retail demand planning. Businesses need a validated forecasting method with a proven track record of accuracy, because prediction accuracy helps executives determine the demand for their products and services in the market, and even small inaccuracies can disrupt the entire process.

Advanced machine-learning algorithms and demand forecasting together automate the tasks involved in inventory management, relieving retailers of the hassle of managing stock. Inventory can be prioritized according to user interest, requirements, and profit value. This is one of the primary reasons for implementing ML solutions in demand forecasting.
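For instance, once a demand forecast is in place, inventory planning can lean on a couple of standard formulas. Here is a minimal sketch (the figures are invented) that turns forecast demand into a safety stock and a reorder point:

```python
import math

# Hypothetical forecast outputs for one product.
mean_daily_demand = 120   # units/day, from the demand forecast
demand_std_dev = 25       # daily demand variability
lead_time_days = 7        # supplier lead time
z = 1.65                  # service-level factor (~95% service level)

# Standard safety-stock and reorder-point formulas.
safety_stock = z * demand_std_dev * math.sqrt(lead_time_days)
reorder_point = mean_daily_demand * lead_time_days + safety_stock

print(f"Safety stock: {safety_stock:.0f} units")
print(f"Reorder point: {reorder_point:.0f} units")
```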

Enhanced business and customer relationships

This is another significant benefit of using machine learning in retail. As mentioned earlier, machine learning forecasts and predicts users’ needs well in advance, which helps avoid situations such as unsold old stock or stock-outs. Predictions and forecasts can also reveal customers’ future preferences, strengthening the relationship between the brand and its customers because the business keeps offering products and services that are in demand.

Increase in profitability of the business

Using machine learning in demand forecasting helps retail businesses increase their profitability. Customized ML solutions automate demand forecasting and reduce operational costs by predicting the ideal time for product sales. Eventually, companies can ensure better, more optimized business operations. In addition, it helps them save the considerable sums that would otherwise go into hiring additional talent and compensating for losses.

Effective sales and advertising campaigns

Demand forecasting and machine learning together can make sales and marketing campaigns more efficient. Advanced ML software helps analyze market trends, and it also predicts future demand and market conditions. As a result, retail businesses can determine what improvements their products and services need in order to stay in demand among customers.

Using this crucial information, businesses can save cost and time while increasing sales and potential leads, which is otherwise difficult with traditional methods. Thus, implementing machine learning in retail forecasting provides countless benefits beyond access to future demand estimates.

Machine Learning makes it easier to adapt to change

ML demand forecasting can sense changes in demand and update the forecast as new data becomes available. In other words, the forecast can be refreshed on a weekly or daily basis as stock, warehouse, and other data change. The program uses recent actuals to regenerate and run a new forecast, and the accuracy metrics of the latest estimates can be calculated against the base forecast. Comparing the updated and old results allows executives to review performance, demand, and the factors driving them.
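As a small sketch of that review step (the numbers are made up), comparing the most recent actuals against the previous forecast yields the accuracy metrics used to judge whether the updated forecast is an improvement:

```python
# Hypothetical daily actuals vs. last week's forecast for the same days.
actuals  = [132, 118, 141, 125, 150, 160, 138]
forecast = [128, 120, 135, 130, 145, 155, 142]

abs_errors = [abs(a - f) for a, f in zip(actuals, forecast)]
mae = sum(abs_errors) / len(abs_errors)
mape = sum(abs(a - f) / a for a, f in zip(actuals, forecast)) / len(actuals) * 100

print(f"MAE: {mae:.1f} units, MAPE: {mape:.1f}%")
```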

 

Concluding Word

Machine learning isn’t limited to sales and demand forecasting. It has made it easier to forecast future trends, customer engagement, marketing campaigns, brand development, financial risks, resource usage, and more. The potential of this technology depends on how well retail business owners take advantage of it. As things stand, machine learning in demand forecasting is adding value to businesses and helping them meet customer demand. In a nutshell, the possibilities for this technology are endless, and it opens up new opportunities for retail business owners.

Source Prolead brokers usa
