Feedforward Neural Networks

Deep learning has become indispensable to modern machine interaction, search engines, and mobile applications. It has reshaped modern technology by loosely mimicking the human brain and enabling machines to learn and reason from data. Although the concept of deep learning extends to a wide range of industries, the onus falls on software and ML engineers to turn those concepts into actionable real-world implementations. This is where the feedforward neural network comes in.

The simplified architecture of feedforward neural networks offers useful advantages, whether the networks are employed individually on moderately sized tasks or combined to process larger, synthesized outputs.

Today, we’ll dive deep into the architecture of feedforward neural networks and find out how they function. So, let’s dive right in!

Feedforward neural networks are artificial neural networks in which the connections between nodes do not form a cycle. They are biologically inspired algorithms made up of several neuron-like units arranged in layers. The connected units in the network are called nodes. Data enters the network at the input layer and passes through every layer before reaching the output. The connections, however, differ in strength, or weight, and these weights carry vital information about what the network has learned.

Feedforward neural networks are also known as multi-layered networks of neurons (MLN). The network is called feedforward because information flows only in the forward direction, entering through the input nodes and moving layer by layer toward the output. There are no feedback connections through which the network’s output is fed back into the network.

These networks are built from combinations of a simple model known as the sigmoid neuron. The sigmoid neuron is the foundation of a feedforward neural network.

Here’s why feedforward networks have the edge over conventional models: 

  • Conventional models such as the perceptron take real-valued inputs and render a Boolean output, but only if the data is linearly separable. This means the positive and negative points must sit on opposite sides of a decision boundary. 
  • When the data is linearly separable, selecting a good decision boundary to segregate the positive and negative points is also relatively easy. 
  • The output from the sigmoid neuron model is smoother than that of the perceptron, as the short sketch after this list illustrates. 
  • Feedforward neural networks use sigmoid neurons to overcome the limitations of conventional models like the perceptron and process non-linear data efficiently. 
  • With convolutional neural networks and recurrent neural networks delivering cutting-edge performance in areas such as computer vision and language processing, neural networks are finding extensive use in a wide range of fields to solve complex decision-making problems.
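To make the contrast between the perceptron and the sigmoid neuron concrete, here is a minimal Python sketch; the weights, bias, and input values are made up purely for illustration. The perceptron applies a hard threshold to the weighted sum, while the sigmoid neuron squashes the same sum into a smooth value between 0 and 1.

import numpy as np

def perceptron_output(x, w, b):
    # Hard threshold: the output jumps abruptly from 0 to 1 at the boundary.
    return 1 if np.dot(w, x) + b > 0 else 0

def sigmoid_neuron_output(x, w, b):
    # Smooth, differentiable output between 0 and 1.
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([0.6, -0.4]), 0.1   # illustrative weights and bias
x = np.array([1.0, 2.0])            # illustrative input

print(perceptron_output(x, w, b))      # 0: an abrupt yes/no decision
print(sigmoid_neuron_output(x, w, b))  # about 0.48: a graded value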

The feedforward neural networks comprise the following components:

  • Input layer
  • Output layer
  • Hidden layer
  • Neuron weights
  • Neurons
  • Activation function

Input layer: This layer comprises the neurons that receive the input and pass it on to the other layers of the network. The number of neurons in the input layer must equal the number of features or attributes in the dataset.

Output layer: This layer produces the predicted output, whose form depends on the type of model being built.

Hidden layer: The hidden layers sit between the input and the output layer. The number of hidden layers depends on the type of model. Hidden layers contain several neurons that apply transformations to the input before passing it on. The weights in the network are updated continually during training so that its predictions improve.

Neuron weights: The strength, or magnitude, of the connection between two neurons is called a weight. The input weights can be compared to the coefficients in linear regression. The weights are usually initialized to small values, typically within the range of 0 to 1.

Neurons: The feedforward network has artificial neurons, which are an adaptation of biological neurons. Artificial neurons are the building blocks of the neural network. The neurons work in two ways: first, they determine the sum of the weighted inputs, and, second, they initiate an activation process to normalize the sum. 

The activation function can be either linear or nonlinear. A weight is associated with each input of the neuron, and the network learns these weights during the training phase.

Activation function: This is the decision-making unit at the output of a neuron. Neurons finalize linear or non-linear decisions based on the activation function, which also prevents neuron outputs from growing without bound as they cascade through many layers. The three most important activation functions are sigmoid, tanh, and the Rectified Linear Unit (ReLU), each of which is sketched in code after the list below.

  • Sigmoid: It maps the input values into the range of 0 to 1.
  • Tanh: It maps the input values into the range of -1 to 1.
  • Rectified Linear Unit (ReLU): This function allows only the positive values to flow through. The negative values are mapped to 0. 
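Each of these functions is a one-liner in NumPy. The following is a minimal sketch with arbitrary sample inputs, just to show the output ranges described above.

import numpy as np

def sigmoid(z):
    # Maps any real input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Maps any real input into the range (-1, 1).
    return np.tanh(z)

def relu(z):
    # Keeps positive values unchanged; maps negative values to 0.
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 2.0])   # arbitrary sample inputs
print(sigmoid(z))   # approximately [0.12, 0.50, 0.88]
print(tanh(z))      # approximately [-0.96, 0.00, 0.96]
print(relu(z))      # [0.0, 0.0, 2.0]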

Data travels through the neural network’s mesh of connections. Each layer of the network acts as a filter, transforming the input and screening out irrelevant components, until the network generates the final output.

  • Step 1: A set of inputs enters the network through the input layer and is multiplied by the corresponding weights. 
  • Step 2: The weighted values are added together to give a weighted sum. If the sum exceeds a specified threshold (usually 0), the output settles at 1; if it falls short of the threshold, the output is -1. 
  • Step 3: A single-layer perceptron, which applies exactly this procedure for classification, is a crucial building block of feedforward neural networks. 
  • Step 4: The outputs of the network can then be compared with their expected values with the help of the delta rule, allowing the network to optimize its weights through training and produce more accurate outputs. This training process is driven by gradient descent (a minimal single-neuron sketch of Steps 1, 2, and 4 follows this list). 
  • Step 5: In multi-layered networks, updating the weights works analogously and is more specifically known as backpropagation. Here, each hidden layer is adjusted to stay in tune with the output value generated by the final layer. 
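Here is that minimal single-neuron sketch of Steps 1, 2, and 4, trained on a toy AND-gate dataset and using a 0/1 output convention rather than the -1/+1 convention above; all values are illustrative.

import numpy as np

# Toy dataset: the logical AND function, used only for illustration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)   # small random initial weights
b = 0.0
learning_rate = 0.1

for epoch in range(20):
    for x_i, target in zip(X, y):
        # Steps 1 and 2: weighted sum of the inputs, thresholded at 0.
        output = 1.0 if np.dot(w, x_i) + b > 0 else 0.0
        # Step 4: delta rule, nudging each weight in proportion to the error.
        error = target - output
        w += learning_rate * error * x_i
        b += learning_rate * error

print(w, b)   # learned weights and bias that separate the positive point from the rest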

The operation on this network can be divided into two phases: 

First: Learning Phase

This is the first phase of the network operation, during which the weights in the network are adjusted. The weights are modified so that the output unit corresponding to the correct category ends up with the largest value.

The feedforward network uses a supervised learning algorithm: during training, the network is shown not just the input pattern but also the category to which the pattern belongs. The pattern is transformed as it passes through the successive layers until it reaches the output layer, where each unit corresponds to a different category.

The output values are compared with the ideal values for the pattern’s correct category; the output unit for the right category should have a larger value than the other units. The connection weights are adjusted according to the resulting error, which is propagated backward from the output layer through the network. This is known as backpropagation.

The length of the learning phase depends on the size of the neural network, the number of patterns under observation, the number of epochs, the tolerance level of the minimizer, and the available computing time (which depends on the computer’s speed).

Second: Classification Phase

The weights of the network remain fixed during the classification phase. The input pattern is transformed in every layer until it reaches the output layer, and the pattern is assigned to the category whose output unit has the largest value. To perform classification, the trained feedforward network is selected along with a list of patterns to classify. The classification phase is much faster than the learning phase.
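As a concrete illustration of the two phases, here is a minimal scikit-learn sketch; the synthetic dataset and the hyperparameters are arbitrary placeholders rather than recommendations. fit() corresponds to the learning phase, in which backpropagation adjusts the weights, and predict() corresponds to the classification phase, in which the weights stay fixed and each pattern is assigned to the category whose output unit scores highest.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic "patterns" with known categories, standing in for real data.
X, y = make_classification(n_samples=200, n_features=4, n_classes=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learning phase: backpropagation adjusts the weights inside fit().
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# Classification phase: weights are fixed; each pattern gets the category
# whose output unit has the largest value.
predictions = model.predict(X_test)
print(model.score(X_test, y_test))   # accuracy on previously unseen patterns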

  • The simplified architecture of feedforward neural networks offers leverage in machine learning. 
  • A series of feedforward networks can run independently, with a slight intermediary layer to ensure moderation.
  • The network requires several neurons to carry out complicated tasks. 
  • Non-linear data can be handled and processed easily by a neural network, something that is otherwise difficult for a lone perceptron or sigmoid neuron. 
  • The difficult problem of choosing a decision boundary is alleviated in neural networks. 
  • The architecture of the neural network can vary based on the data. For instance, convolutional neural networks (CNNs) have registered exceptional performance in image processing, whereas recurrent neural networks (RNNs) are highly optimized for text and voice processing. 
  • Neural networks require massive computational and hardware resources to handle large datasets, and hence they typically run on graphics processing units (GPUs). Kaggle Notebooks and Google Colab are two popular notebook environments that provide GPU access and are widely used.

Given that we’ve only scratched the surface of deep learning technology, it holds huge potential for innovation in the years to come. Naturally, the future scope of deep learning is very promising. 

In fact, neural networks have gained prominence in recent years following the emerging role of Artificial Intelligence in various fields. Since deep learning models are capable of mimicking human reasoning abilities to overcome faults through exposure to real-life examples, they present a huge advantage in problem-solving and are witnessing growing demand. 

From image and language processing applications to forecasting, speech and face recognition, language translation, and route detection, artificial neural networks are being used in various industries to solve complex problems.

How to Build an AI Center of Excellence

Artificial intelligence is transforming life as we know it. The COVID-19 pandemic has further accelerated automation and increased enterprise investment in AI across the globe. The global AI market is expected to grow to USD 309.6 billion by 2026, at a Compound Annual Growth Rate (CAGR) …. While there is greater adoption of AI worldwide, scaling AI projects is not easy. According to Gartner, staff skills, fear of the unknown, and finding a clear starting point for AI projects are some of the most prominent challenges enterprises face in their AI journey. To overcome these challenges and launch new AI initiatives, executives are setting up dedicated AI knowledge platforms, or AI Centers of Excellence (AI CoEs), within their organizations. As per a Harvard Business Review article, 37% of U.S. executives from large firms that use AI have already established an AI CoE.

What is an AI Center of Excellence (AI CoE)? 

An AI Center of Excellence (AI CoE) is a centralized knowledge group or team that guides and oversees the implementation of organization-wide AI projects. An AI CoE brings together the AI talent, knowledge, and resources required to enable AI-based transformation projects, along with all the AI capabilities needed to address the challenges of AI adoption and prioritize AI investments. It essentially serves as an internal, centralized counsel that identifies new opportunities for leveraging AI to solve business problems such as controlling costs, improving efficiency, and optimizing revenue. The key objective of setting up an AI CoE is to build and support the company’s AI vision and to serve as an internal counsel that manages all AI projects.

Why should you build an AI CoE?

An AI CoE plays a vital role in developing AI talent and driving innovation within the company. The team acts as an internal counsel for guiding the company on all AI initiatives from prioritizing AI investments to identifying high-value use cases for implementation. By providing a robust framework for AI implementation, the CoE helps in building future-ready engineering capabilities to manage high volumes of data, improve efficiency, and drive innovation. Here are the key benefits of creating an AI CoE:

  • To consolidate AI resources, learnings, and talent in one place.
  • To create and implement a unified AI vision and strategy for the business.
  • To standardize the platforms, processes, and approach to AI within the organization.
  • To speed up AI-led innovation and identify new revenue opportunities.
  • To scale data science efforts and make AI accessible to every function within the company.
  • To drive AI-enabled initiatives such as cost reduction, churn prevention, and revenue maximization to stay ahead of competitors.

How to build an AI CoE? 

Technology is constantly evolving. Enterprises must continuously adapt their AI roadmap to deliver the highest business value. With scattered data science teams, resources, and legacy systems, it becomes hard to know where to start. To set up an AI innovation center, business leaders must take a holistic approach encompassing all the factors that contribute to its success.

The key pillars of an AI Center of Excellence 

A CoE or innovation center is built on the following four primary pillars:

1. Strategy: Strategy helps in clearly defining the business goals, identifying the high-impact use cases, and prioritizing the AI investments. When you are starting with AI, it is okay to think big, but it is wise to begin with small, smart, achievable, realistic AI goals. This will allow you to progress quickly and gauge the ROI from your AI initiatives.

2. People: The people-oriented pillar helps define the data-driven culture and the strategies for managing the teams that use AI and data within the organization. It is important to clearly define the roles, data owners, and structures for driving AI-led innovation. The people strategy also helps in hiring the right AI talent and fostering a collaborative, innovation-driven culture.

3. Processes: The process-oriented pillar helps define the methods to sustain continual AI innovation within the company. The more iterative and agile the process is, the easier it is to learn fast and adapt to fast-changing business needs.

4. Technology: The right technology stack is crucial for building robust AI capabilities. The tech strategy should clearly define the process for evaluating and adopting new tools that match the organizational needs and work well with the existing IT infrastructure.

Best practices and tips to build an AI CoE 

If executed with leadership commitment and a well-planned strategy, AI has the power to reward organizations with non-linear returns. However, if it is not operationalized with the right fundamentals and critical mass, it can become a bottomless pit of investment with no significant returns. Here are some best practices and tips for setting up an AI CoE:

1. Set AI vision and measurable goals 

Leaders need to identify the key business objectives they want to achieve with AI, such as improving conversion or reducing churn. These core goals help in prioritizing the AI investments to be made and in identifying the highest-impact use cases to implement first. You also need to develop a transparent and comprehensive system to track the progress and measure the benefits of your AI initiatives. By capturing the benchmarks and KPIs for AI experiments, enterprises can gauge the value generated and make course corrections early on if required.

2. Assemble the right team and set up governance

People are the core strength of an AI CoE. Once you have identified the business problems to solve, you need to onboard the right talent to the core innovation team. Roles and responsibilities will have to be clearly defined. Leaders will also have to set up governance to oversee the development of the CoE.

3. Get your data ready for AI 

AI is only as smart as the data used to train it. Enterprises must invest in robust data collection, cleaning, storage, management, and validation mechanisms to ensure that the data used is reliable and ready for AI.

4. Standardize and create reusable AI assets

Based on the existing systems, the business objectives, and high-value AI use cases, companies must invest in the necessary tools and infrastructure required to apply AI. By creating scalable, flexible, and reusable AI assets, enterprises can apply their AI solutions to multiple scenarios and derive more value.

5. Democratize AI and collaborate with no-code platforms

Good ideas can come from anywhere. To drive AI-led innovation, you need a collaborative, data-driven work culture and the right AI tools. A no-code AI platform can enable anyone in the organization to use AI and apply their perspective to solve complex business problems. In addition to democratizing AI, these platforms help in standardizing AI operations within the company. Leaders must also initiate AI and data science education across functions to nurture an AI-first culture.

The final verdict

In today’s competitive market, AI has become a necessity and a key enabler of growth. No-code AI platforms such as HyperSense can accelerate the adoption and democratization of AI across the organization. Such platforms put the power of AI in the hands of non-technical business users, eliminating the traditional challenges of misaligned objectives, skill shortages, and siloed operations.

Flexible and modular AI platforms play a vital role in broadening organization-wide AI adoption. A dedicated AI CoE and the right AI platform can facilitate the launch of AI initiatives, accelerate AI implementation, improve efficiencies, and ultimately help enterprises to achieve their long-term AI goals.

For more details on the HyperSense AI platform, please email us at [email protected] or visit our website, www.hypersense.subex.com

Author: Arundeep Sivaraj

Why we shouldn’t expect a metaverse anytime soon (Part I of II)

Castaneda, Eduardo. Carl Sagan with the planets. 1981. Photograph. https://www.loc.gov/item/cosmos000104/. (Photo taken on the set of the TV show Cosmos.)

“The Metaverse” is a term appearing more and more in the tech press lately. Which begs the question, what does the term really mean? What does the concept actually imply from a feasibility standpoint? And most importantly, what are the prospects for something like a metaverse to appear and gain adoption?

The vision of a metaverse or mirrorworld 

“The Metaverse” is a term coined by author Neal Stephenson in his 1992 novel Snow Crash. In the book, Stephenson presents the Metaverse as an alternate, more manipulable and explicitly informative virtual and augmented reality. That overlay provided a pervasive, interactive, digital complement to physical reality. The term gained popularity among tech enthusiasts beginning in the 2000s.

More recently, folks like Mark Zuckerberg have co-opted the term. According to Zuckerberg, as quoted by Kyle Chayka of The New Yorker, the metaverse is “a virtual environment where you can be present with people in digital spaces,” “an embodied Internet that you’re inside of rather than just looking at. We believe that this is going to be the successor to the mobile Internet.” https://www.newyorker.com/culture/infinite-scroll/facebook-wants-us-to-live-in-the-metaverse 

Facebook clearly hopes to expand the market for its Oculus goggles and its advertising, and is using the metaverse metaphor to help describe to its users how they might shop, game and interact more immersively online. It’s a narrow, frankly self-serving view of what a metaverse might consist of.

Since Stephenson’s book was published, other terms have cropped up that describe a comparable concept. Kevin Kelly, for example, introduced his concept of a “Mirrorworld” (a term hardly original to him) in an article in Wired magazine in March 2019.

In a discussion with Forbes contributor Charlie Fink onstage at Augmented World Expo in August 2019, Kelly said, “Mirrorworld is a one-to-one map of the real world that’s in digital form. That is, there’s a digital skin right over the real world that can be revealed using AR. It’s at the same scale and in the same place, so it’s a skin. Or you could say it’s a ghost or it’s embedded in the same way.” https://arinsider.co/2019/08/02/xr-talks-what-on-earth-is-mirrorworld/

During the discussion, Kelly and Fink both agreed that it could well take 25 years for the Mirrorworld to fully emerge as a commonly useful and scaled out “digital skin.”

Digital twins imagined

More explicitly, Kelly and Fink were really talking about the presentation layer or digital skin as 3D representations of people, places, things and concepts, and how they interact. Fink had used the term metaverse in a similar context a year earlier. Either the mirrorworld or the metaverse, in other words, starts with interacting, interoperable representations known as digital twins (a term often used by academic Michael Grieves beginning in 2002 in a product lifecycle manufacturing context).

At present, useful digital twins behave in limited, purpose-specific ways like the physical objects they represent. Manufacturers–GE, Schneider Electric and Siemens, for example–create digital twins of equipment such as gas turbines designed for use in power plants to predict failures at various points in each product’s lifecycle. Those twins help the manufacturers refine those designs. 

But the longer-term vision is more expansive. All of these companies also have promising, but still nascent, initiatives in the works that could potentially expand and further connect digital twins in use. Consider, for instance, Siemens’ concept–at least four years old now–for an industrial knowledge graph that could empower many digital twins with relationship-rich, contextualizing data.

Some organizations envision even larger and more intricately interactive and useful aggregations of digital twins. The UK’s Centre for Digital Built Britain (CDBB), for instance, imagines “the entire built environment as an interconnected system if digital twins are developed to be interoperable, secure and connected.” 

Digital models of government agencies, when up and running, could conceivably be interconnected as a combination of models to provide views of past, present and future states of how these agencies interact in different contexts. For their part, asset-intensive industries such as pharmaceuticals, consumer products, transportation and agriculture are all working toward whole supply chain digitization. 

Unfortunately, most data and knowledge management shops are not versed in the kind of web scale data management the UK’s CDBB hopes to harness for these organization-sized digital twins. Nor do they have the authority or the budget to scale out their capabilities.

The Data (and Logic) Challenges of Interactive Virtual and Augmented Things

Stephenson, Kelly, and Fink’s visions are inspiring, but they are mainly imagining the possible from the perspective of users, without exploring the depths of the challenges organizations face to make even standalone twins of components of the vision a reality. 

Data siloing and logic sprawl have long been the rule, rather than the exception, and companies are even less capable of managing data and limiting logic sprawl than they were during the advent of smartphones and mobile apps in the late 2000s and early 2010s. (See https://www.datasciencecentral.com/profiles/blogs/how-business-can-clear-a-path-for-artificial-general-intelligence for more information.)

What has this meant to the typical information-intensive enterprise? A mountain of 10,000+ databases to manage, thousands of SaaS and other subscriptions to oversee, plus custom code and many operating systems, tools, and services to maintain. All of these are spread across multiple clouds and controlled by de facto data cartels, each of which claims control based on its role in purposing the data early on in the provenance of that data. 

Despite long-term trends that have helped with logic and data sharing such as current and future generations of the web, data is now more balkanized than ever, and the complexity is spiraling out of control.

The truth is, most of this out-of-control legacy + outsourced, application-centric cloud stack gets in the way of what we’re trying to do in a data-centric fashion with the help of knowledge graphs and a decentralized web.

Getting beyond data silos and code sprawl to a shared data and logic foundation

As you can imagine, moving beyond an entrenched paradigm that schools have been reinforcing for 40 years requires stepping back and examining the “digital ecosystem”, such as it is, with a more critical eye. For starters, the ubiquitous claims of automated data integration need to be taken with a grain of salt. The truth is, data agreement is hard, data integration is even harder, and interoperability is the hardest. And there are no interactive digital twins without interoperability.

So there are tiers of difficulty the data and knowledge science community has to confront. Predictably, the more capability needed, the more care and nurturing a knowledge foundation requires. In simple terms, the tiers can be compared and contrasted as follows:

  • Agreement. Compatibility with and consistency of two or more entities or datasets
  • Integration. Consolidation within a single dataset that provides broad data access and delivery
  • Interoperability. Real-time data exchange and interpretation between different systems that share a common language. This exchange and interpretation happens in a way that preserves the originating context of that data.

In Part II, I’ll delve more into the knowledge foundation that will be necessary for a shared metaverse or mirrorworld to exist.

AI-Led Medical Data Labeling For Coding and Billing

The healthcare sector is among the largest and most critical service sectors globally. Recent events like the COVID-19 pandemic have heightened the challenge of handling medical emergencies with adequate capacity and infrastructure. Within the healthcare domain, healthcare equipment supply and usage have come under sharp focus during the pandemic. The sector continues to grow at a fast pace, is projected to record a 20.1% CAGR, and is estimated to surpass $662 billion by 2026.

Countries like the US spend a major chunk of their GDP on healthcare. The US sector is technologically advanced and futuristic, and the amount spent per person annually is higher than in any other country in the world. It includes general acute care hospitals, government hospitals, and specialty, non-profit, and privately owned hospitals. Healthcare funding spans private, governmental, and public insurance claims and finance processes. US private healthcare is dominated by privately run facilities, in which costs are largely borne by the patients.

Underlying challenges in the sector’s digital transformation

The healthcare sector has its own challenges to deal with. Dependence on legacy apps and conventional procedures for treating patients has resulted in substantial revenue losses. Hospital revenues have taken a setback, and even where EHR (electronic health record) systems are implemented, the granular information in a physician’s clinical summary is often difficult to record and maintain.

Medical billing and claims processing is yet another tough area to manage. For healthcare institutions, maintaining a seamless patient experience has become crucial. The process of translating a patient’s medical details and history into codes allows healthcare institutions and payers to track and monitor the patient’s current medical condition, manage their history, and keep records correct and up to date. The process is lengthy, and the slightest human error can result in discrepancies in the patient’s history and in the financial transactions for the treatment received. Such errors can disrupt the claims and disbursal process for all future transactions and put the tracking of a patient’s current medical condition at risk. Beyond this, they create additional hassles for medical practitioners, healthcare institutions, and insurance providers when processing and settling claims.

Moreover, when it comes to tracking and providing appropriate treatment, health practitioners and institutions face the persistent challenge of collating patient data from multiple sources and analyzing it manually.

Building medical AI models with reliable training datasets

The healthcare sector deals with gigantic volumes of data that are sensitive to patients’ health and also affect physicians’ credibility.

For a long time, health institutions have invested significantly in managing patient records and relied on software and costly legacy applications, which have their own limitations. Meanwhile, hiring trained professionals or medical coders and outsourcing service platforms have added to the spending. Implementing EHR systems has improved the fundamental processes, yet technical limitations have made it difficult for the medical sector to rely on them entirely. This has led to delays in accessing patients’ treatment histories, suggesting effective treatment and care, billing, and processing medical claims, eventually hurting revenue growth for health institutions and other players in the chain.

To handle such scenarios, the healthcare community has aggressively adopted artificial intelligence, combining machine learning and NLP to automate key processes. AI and machine learning programs are automating procedures like medical coding and shortening the patient-care cycle. In more than 100 countries, clinical summaries are converted into codes. AI and natural language processing (NLP) programs trained on structured medical training data help doctors access a patient’s history through these medical codes instantly, without delay. The effectiveness of AI-based results is reducing patient visits and stress on doctors, and improving the entire lifecycle of the patient’s experience with doctors, health service providers, and medical claim payers.

In addition, AI-enabled processes are easing the pressure on all the stakeholders in the loop, for example by revealing out-of-pocket expenses to patients before they avail themselves of healthcare services, which helps patients plan their expenditure beforehand. They have also streamlined pre-authorization procedures and sped up the entire cycle of patient care by the healthcare provider. Firms like Cogito are actively developing medical coding and billing training data with the help of a specialized team of in-house medical practitioners to deliver cutting-edge data labeling services. In medical billing, AI programs powered by machine learning and structured training data are ensuring efficient revenue cycles and preventing claim denials caused by incorrect or missing patient information.

Endnote:

Recent AI implementations have helped healthcare institutions provide proactive support to patients and, in the process, tackle significant revenue loss. In the healthcare domain, artificial intelligence is letting patients, claim payers, and healthcare service providers work in tandem, accelerating overall sectoral growth. AI-led automation powered by NLP is saving up to 70% in time and overall costs. From availing themselves of a health service to identifying the right health institution for treatment, both patients and doctors are gaining immense value from the transformation. Originally published at – Healthcare document processing

Career Opportunities in Blockchain and The Top Jobs You Need to Know

Blockchain expertise is one of the fastest-growing skills, and demand for blockchain professionals is picking up momentum in the USA. Since cryptocurrencies have been doing well for the last few years and many investors are looking to invest in them, the demand for blockchain engineers is growing. Blockchain technology certifications have become very popular in the last few years.

The Growing Demands for Blockchain Specialists

Demand for blockchain professionals is increasing, and blockchain technology certifications are popular courses in institutes and universities. Globally, according to Glassdoor, the demand for blockchain professionals grew by 300% in 2019 compared to 2018, and this trend is expected to continue in the years to come.

With the arrival of cryptocurrencies, this disruptive technology called blockchain became a hot cake in the IT world. It is estimated that by 2023, $15.9 billion will have been invested in blockchain solutions worldwide. To attract that kind of investment, there needs to be an ample supply of blockchain professionals.

The food and agriculture industry and many other industries are developing solutions based on blockchain technology. With the rising focus on digital transformation, this technology is set to enter many other industries beyond the financial sector.

The Top Blockchain Jobs you Need to Know About

• Blockchain Developer- Blockchain developers need to have a lot of experience with the C++, Python, and JavaScript programming languages. Many companies, such as IBM, Cygnet Global Solutions, and AirAsia, are hiring blockchain developers across the globe.

• Blockchain Quality Engineer- In the blockchain world, a blockchain quality engineer ensures that quality assurance is in place and that testing is done thoroughly before the product is released. Since daily spikes in buying and selling trends are common in the blockchain industry, the platforms need to be robust enough to take the load.

• Blockchain Project Managers- Blockchain project managers mostly come from a development background of C, C++, Java and they possess excellent communication skills. They need to have hands-on experience as a traditional (cloud) project manager. If they understand the complete software development life-cycle (SDLC) of the blockchain world, they can get a highly paid job in the market.

• Blockchain Solution Architect- The Blockchain Solution Architect is responsible for designing, assigning, connecting, and hand-holding in implementing all the blockchain solution components with the team experts. The blockchain developers, network administrators, blockchain engineers, UX designers, and IT Operations need to work in tandem as per the SDLC plan and execute the blockchain solution.

• Blockchain Legal Consultant- Since blockchains in the financial sector involve the exchange of money in the form of cryptocurrencies, it is obvious that there will be a lot of litigation and irregularities. Because these currencies operate on peer-to-peer networks and governments have very limited visibility into the transactions, many fraudulent activities and money-laundering schemes will run through them. Hence there will always be demand for blockchain legal consultants, as it is a new field that needs an in-depth understanding of the technology and of how the financial world revolves around it.

Top companies hiring blockchain professionals

• IBM- They hire blockchain developers, blockchain solution architects across the globe in their offices.
• Toptal- This is a great platform for freelance blockchain engineers and developers to select projects and work on them from any part of the globe. Many Fortune 500 companies are exploring the option of hiring freelance blockchain developers.
• Gemini Trust Company, LLC- New York, USA
• Circle Internet Financial Limited- Boston, Massachusetts, USA
• Coinbase Global Inc. – No headquarters

Top Blockchain Certifications for Professionals in 2021

• CBCA certifications (Business Blockchain Professional, Certified Blockchain Engineer)
• Certified Blockchain Expert courses from IvanOnTech are quite economical and ideal for beginners.
• Blockchain Certification Course (CBP) — EC Council (International Council of E-Commerce Consultants)

How business can clear a path for artificial general intelligence

20 years ago, most CIOs didn’t care much about “data”, but they did care about applications and related process optimization. While apps and processes were where the rubber met the road, data was ephemeral. Data was something staffers ETLed into data warehouses and generated reports from, or something the spreadsheet jockeys worked with. Most of a CIO’s budget was spent on apps (particularly application suites) and the labor and supporting networking and security infrastructure to manage those apps. 

In the late 2000s, and early 2010s, the focus shifted more to mobile apps. Tens of thousands of large organizations, who’d previously listened to Nick Carr tell them that IT didn’t matter anymore, revived their internal software development efforts to build mobile apps. And/or they started paying outsiders to build mobile apps for them.

At the same time, public cloud services, APIs, infrastructure as a service (IaaS) and the other X-as-a-service (XaaS) offerings began to materialize. By the late 2010s, most large organizations had committed to “migrating to the cloud.” What that meant in practice was that most large businesses (meaning 250 or more employees) began to subscribe to hundreds, if not thousands, of software-as-a-service (SaaS) offerings, in addition to one or more IaaSes, PaaSes, DBaaSes and other XaaSes.

Business departments subscribed directly to most of these SaaSes with the help of corporate credit cards. Often a new SaaS suggests a way department managers can get around IT bureaucracy and try to solve the problems IT doesn’t have the wherewithal to address. But the IT departments themselves backed the major SaaS subscriptions, in any case–these were the successors of application suites. Networking and application suites, after all, were what IT understood best.

One major implication of XaaS and particularly SaaS adoption is that businesses in nearly all sectors (apart from the dominant tech sector) of the economy have even less control of their data now than they did before. And the irony of it all is that in 99 percent of the subscriptions, according to Gartner, any SaaS security breaches through 2024 will be considered the customer’s fault–regardless of how sound the foundation for the SaaS is or how it’s built.

The end of AI winter…and the beginning of AI purgatory

20+ years into the 21st century, “data” shouldn’t just be an afterthought anymore. And few want it that way. All sorts of  hopes and dreams, after all, are pinned on data. We want the full benefits of the data we’re generating, collecting, analyzing, sharing and managing, and of course we plan to generate and utilize much more of it as soon as we can.  

AI, which suffered through three or four winters over the past decades because of limits on compute, networking and storage, no longer has to deal with major low temperature swings. Now it has to deal with prolonged stagnation.

The problem isn’t that we don’t want artificial general intelligence (AGI). The problem is that the means of getting to AGI requires the right kind of data-centric, system-level transformation, starting with a relationship-rich data foundation that also creates, harnesses, maintains and shares logic currently trapped in applications. In other words, we have to desilo our data and free most of the logic trapped inside applications. 

Unless data can be unleashed in this way, contextualized and made more efficiently and effectively interoperable, digital twins won’t be able to interact or be added to systems. Kevin Kelly’s Mirrorworld will be stopped dead in its tracks without the right contextualized, disambiguated and updated data to feed it.

As someone who has tracked many different technologies and how they are coming together, I feel a particular frustration that brought to mind a story I read a while back. 117 years ago, The Strand Magazine published a short story by H.G. Wells — a parable, really — called “The Country of the Blind”. The parable is about a mountain climber named Nunez who has all of his faculties, including sight. After a big climb, Nunez falls down the remote slope of a South American mountain. He ends up in a village consisting entirely of blind people.

Once Nunez lives among the villagers for a bit and learns they’re all blind, Nunez assumes he can help them with his gift of sight. “In the Country of the Blind, the one-eyed man is king,” he repeats to himself. 

The thing is, the villagers don’t believe in the sense of sight. They don’t take any stock in what Nunez is saying. Finally, realizing he can’t help them, Nunez leaves and starts to climb back up the mountain. In a 1939 update to the story, Nunez notices a rockslide starting, and calls out to the villagers to save themselves. They again ignore him. The rockslide buries the village.

The data-centric architecture that AGI requires

Nowadays, data can be co-mingled with some logic in standard knowledge graphs. In these knowledge graphs, description logic (including relationship logic, which contextualizes the data so it can be shared and reused) and rule logic, together with the data itself, become a discoverable, shared, and highly scalable resource: a true knowledge graph that allows the modeling and articulation of the business contexts that AGI demands.
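As a toy illustration of this co-mingling, the short sketch below uses the open-source rdflib library; the equipment and plant entities, the example.org namespace, and the query are all invented for demonstration and are not drawn from any vendor’s actual graph.

from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Data and relationship ("description") logic live side by side in the graph.
g.add((EX.Turbine7, RDF.type, EX.GasTurbine))
g.add((EX.GasTurbine, RDFS.subClassOf, EX.Equipment))
g.add((EX.Turbine7, EX.installedAt, EX.PlantA))
g.add((EX.PlantA, RDFS.label, Literal("Power Plant A")))

# A SPARQL query traverses those relationships to recover business context.
results = g.query(
    """
    SELECT ?asset ?site WHERE {
        ?asset a ex:GasTurbine ;
               ex:installedAt ?site .
    }
    """,
    initNs={"ex": EX},
)

for asset, site in results:
    print(asset, "is installed at", site)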

With a properly formed knowledge graph as the foundation, data-centric rather than application-centric architecture, as Dave McComb of Semantic Arts pointed out in Software Wasteland and the Data-Centric Revolution, can become a means of reducing complexity, rather than adding to it. Even more importantly, these knowledge graphs are becoming a must-have for scalable AI operations, in any case.

Sounds great, doesn’t it? The thing is, even though these methods have been around for years now, I’d guess that less than one percent of those responsible for AI in enterprise IT are aware of the broad knowledge graph-based data and knowledge management possibilities that exist, not to mention the model-driven development possibilities. This is despite the fact that 90 percent of the companies with the largest market capitalizations in the world have been using knowledge graphs for years now.

Why such a low level of awareness? Most are too preoccupied with the complexities of cloud services and how to take best advantage of them to notice. If they are innovating on the AI/data science front, they’re consumed with small, targeted projects rather than how systems need to evolve. 

17 years after Nick Carr’s book Does IT Matter? was published, most are still focused on the faucet, when the rest of the Rube Goldberg-style data plumbing has long ago become rusty and obsolete and should have been switched out years ago.

Should software as a service come with a warning label?

Which begs the question–why should we trust the existing system at all, if, as McComb points out, it’s so duplicative, antiquated, and wasteful? It’s easy to make the case that we shouldn’t: considering that so few know how true data-centric systems should be designed, commercial software should come with prominently placed warning labels, just like drugs do. That’s even more the case with software that’s data-dependent–“AI”-enhanced applications, for instance.

This is not to mention that many B2C SaaSes are basically giant data farming, harvesting and personalized advertising operations–platforms that rely on highly data-dependent machine-learning loops. From a privacy and security perspective, much of the risk comes from data farming, storing, duplicating and distributing. Most major B2B SaaSes, for their part, are designed to take best advantage of customer data–but only within the SaaS or the provider’s larger system of systems.

Shouldn’t we put warning labels on SaaSes if billions of people are using them, especially while those SaaSes are using archaic forms of data and logic management that don’t scale? Some of the necessary cautions to those who dare to use today’s more ominous varieties of AI-enhanced, data-dependent and data-farming software that come to mind include these:

Suggested warnings to new users, each paired with a principle to counter the implicit risk:

Warning: This new AI-enhanced technology, even more than most previous software technologies in use, has been developed in a relative vacuum, without regard to broader, longer-term consequences or knock-on effects.

Countering principle: Murphy’s Law (anything that can go wrong, will go wrong) applies when adding this software to operational systems. Relevant backup plans and safety measures should be put in place prior to use.

Warning: This social media platform utilizes advanced gamification techniques designed to encourage high levels of use. When excessive use is coupled with the platform’s inherently broad disinformation campaign distribution abilities at web scale, the risk of sociopolitical destabilization may increase dramatically.

Countering principle: The principle of Occam’s Razor (the simplest answer is usually correct) is a good starting point for any assessment of widely believed but easily discreditable theories and assertions (e.g., “COVID-19 isn’t real”) in postings you come across, like, or share that depend on high numbers of assumptions. Consider carefully the disinformation proliferation risks that will result.

Warning: A typical medium to large-sized organization is likely to have subscriptions to hundreds or thousands of software-as-a-service platforms (SaaSes). Are you sure you want to subscribe to yet another one and add even more complexity to your employer’s existing risk, compliance, application rationalization and data management footprint?

Countering principle: McComb’s Cost of Interdependency Principle applies: “Most of the cost of change is not in making the change; it’s not in testing the change; it’s not in moving the change into production. Most of the cost of change is in impact analysis and determining how many other things are affected. The cost of change is more about the systems being changed than it is about the change itself.” Please don’t keep adding to the problem.

Warning: This form requires the input of correlatable identifiers such as social security, home address and mobile phone numbers that could afterwards be duplicated and stored in as many as 8,300 or more different places, depending on how many different user databases we decide to create, ignore the unnecessary duplication of, or sell off to third-party aggregators we have no control over and don’t have the resources or desire to monitor in any case.

Countering principle: The Pareto Principle (80 percent of the output results from 20 percent of the input) may apply to the collectors of personally identifiable information (PII), i.e., one out of five of the sites we share some PII with might be responsible for 80 percent of that PII’s proliferation. Accordingly, enter any PII here at your own risk.

As you might be able to guess from a post that’s mostly a critique of existing software, this blog won’t be about data science, per se. It will be about what’s needed to make data science thrive. It will be about what it takes to shift the data prep burden off the shoulders of the data scientist and onto the shoulders of those who’ll build data-centric systems that for the most part don’t exist yet. It will be about new forms of data and logic management that, once in place, will be much less complicated and onerous than the old methods. 

Most of all, this blog will be about how systems need to change to support more than narrow AI. All of us will need to do what we can to make the systems transformation possible, at the very least by providing support and guidance to true transformation efforts. On their own, systems  won’t change the way we need them to, and incumbents on their own certainly won’t be the ones to change them.

Binding Cloud, PLM 2.0, and Industry 4.0 into cohesive digital transformation

In the environment of Industry 4.0, the role of PLM is expanding. The interplay between PLM and Industry 4.0 technologies – like the Internet of Things (IoT), Big Data, Artificial Intelligence (AI), Machine Learning, AR/VR, Model-Based Enterprise (MBE), and 3D Printers – is on the rise. But Industry 4.0 is unlike any previous industrial revolution. The last three, from 1.0 to 3.0, were aimed at driving innovation in manufacturing. 4.0 is different. It is changing the way of thinking. And PLM is at the heart of this new way of thinking.

Industry 4.0 is marked by pervasive connectedness. Smart devices, sensors, and systems are connected, creating a digital thread across Supply Chain Management (SCM), Enterprise Resource Planning (ERP), and Customer Experience (CX) applications. This demands that new digital PLM solutions be placed at the core, making it the key enabler of digital transformation.

However, organizations cannot take a big bang approach to digital transformation – or, by implication, to PLM. Issam Darraj, ELIT Head Engineering Applications at ABB Electrification, says that organizations need to take this one step at a time. They need to first build the foundation for digital transformation, then create a culture that supports it. They should invest in skills and collaboration, focus on change management, become customer-centric, and be able to sell anytime, anywhere. Simultaneously, PLM must evolve into PLM 2.0 as well.

PLM 2.0 is widely seen as a platform whose responsibility does not end when a design is handed over to manufacturing. PLM 2.0 impacts operations, marketing, sales, services, end-of-life, recycling, etc. What began as an engineering database with MCAD and ECAD is now an enabler of new product design, with features such as Bill of Materials, collaboration, and release processes rolled into it.

As the role of PLM evolves, it is moving to Cloud. We believe that SaaS PLM is the future, because Cloud is central to Industry 4.0. With connected systems and products sending back a flood of real-time data to design, operations, and support functions, Cloud has become the backbone for storing that data and driving real-time decisions. Organizations that once used Cloud mainly to bring down costs must change that focus: availability and scalability should be the primary considerations.

Digital Transformation, Industry 4.0 technologies, PLM and Cloud are complex pieces of the puzzle. Most organizations need partners who understand every individual piece of the puzzle and know how to bring them together to create a picture of a successful, competitive, and customer-focused organization. An experienced partner will be able to connect assets, create data-rich decisioning systems, leverage Industry 4.0 technologies, and leverage Cloud to expand the role of PLM.

Author:

Sundaresh Shankaran

President, Manufacturing & CPG,

ITC Infotech

Responding to Supply Chain Uncertainty with Intelligent Planning

Supply chain disruptions are anything but uncommon. 2018 saw 2,629 disruption events being reported.[i] In 2019, 51.9% of firms experienced supply chain disruptions, with 10% experiencing six or more disruptions.[ii] Largely, these were man-made (79%). Most had a low severity (63%). Over time it has become evident that the ability of organizations to negotiate medium to high disruptions (37%) is low. In 2019, the impact of this inability was a loss of € 100 M for 1 in 20 organizations, with the average annual cost of disruption being € 10.5 M.[iii] Significantly, on average it took organizations 28 weeks to recover from the disruption.[iv] The numbers don’t reveal the entire picture. But in themselves, they indicate a lack of resilience and an urgent need for organizations to predict, assess and mitigate disruptions with greater agility.

The real picture is that the cost of disruption is being under-reported. For many organizations, the ability to accurately compute the cost of response, opportunity cost, and reputational damage is absent. The gaps in capability go beyond this. While most organizations can negotiate minor disruptions through safety stocks and long-term contracts, it is disruptions with higher severity and frequency that present overwhelming challenges. These gaps can be bridged by:

  • Creating early warning capabilities
  • Anticipating the impact of events on supply chains
  • Understanding the financial impact and transmitting it across value chains

The essence of resilience

The only way to beat today’s volatility and unpredictability is to bring speed and agility through connected, real-time systems. Organizations take days to realize they have been hit by a disruption, and boardroom discussions on risk assessment and formulating a management strategy run into another handful of days (see Figure 1 for the net saving in response time that can be achieved between traditional and agile responses to disruption). All the while, the organization is bleeding, losing business, customers, and reputation. Reducing the response time from 28 weeks to 16 weeks is the essence of building resilience into supply chain planning.

Figure 1

Modern technology can help crash time-to-respond by almost 60%, enabling faster risk simulation and helping prepare an optimal response. This is easier said than done. Most organizations are stumped by the roadblocks they face to resilience. Bob Debicki, Senior Director, CPG & Retail, Anaplan, says that “Organizations are ill-equipped to handle disruption.” Debicki observes that only 22.6% of organizations use technology to map full tier-n networks. This means there is a severe lack of visibility into what is happening within supply chains. 73% of organizations still rely on spreadsheets to manage supply chain risks, indicating that applying intelligent technology can make a dramatic change and build resilience. Organizations can change their approach by building:

  1. Communication and visibility through supplier and trading partner collaboration
  2. Establishing interlinkages across n-tier supply networks
  3. Creating early sensing capability by extracting insights from systems of records
  4. Using technology for rapid simulation and scenario planning ahead of disruptions using a more granular level of information accessible with higher frequency
  5. Enabling a cohesive response to disruptions through a connected planning ecosystem

But many businesses are getting comfortable with the existing baseline for forecasting. Unfortunately, that baseline is no longer valid in light of the disruption created by the COVID-19 pandemic. Early sensing capabilities using real-time information coupled with granular information (down to SKU and location) have become essential if organizations are to survive the shock of events such as the outbreak of COVID-19.

Breaking out of siloes

Many of the required capabilities to withstand extreme disruption already exist within organizations. But they exist in siloes. Organizations need to bring all their capacities and data onto a single platform to create the perfect response.

By converging the data organizations can build the three pillars of intelligent planning central to resilience (see figure 2 for details): deep visibility into the supply chain that helps develop an interactive view of risk, the ability to simulate scenarios, and the ability to respond through connected planning. The goal of convergence should be to sense fast, simulate, assess the impact on supply chains and bottom lines, and respond at speed with optimal mitigation strategies.

Figure 2

What should organizations focus on during disruptions such as those triggered by COVID-19? There are four focus areas we recommend: demand sensing, accountability, new tools for planners that go beyond existing algorithms, and connecting stakeholders (see figure 3 for details of the 4 focus areas during disruption)

Figure 3

The smart supply chain solution for complete resilience

Smart organizations are building resilience by connecting their systems to internal and external data sources and to other business areas such as finance, and by integrating with real-time data services. Customers with an eye on the future are using ITC Infotech’s supply chain solutions to derive risk scores for potential disruptions from data, understand the severity of an event, and simulate scenarios for any risk ranging from power outages to supplier bankruptcy, labor shortages, natural disasters, political upheavals (for example, Brexit), foreign exchange rate fluctuations, etc. The ITC Infotech system’s dashboards show supplier performance (defect rate, productivity, etc.) and rankings to identify potential problems such as cost escalations, impact on lead times, and capacity. Supply chain teams can simulate a variety of parameters – such as event probability and duration of the event – to reach accurate decisions related to managing shortfalls, order plans, etc.
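As a generic, purely illustrative example of this kind of scenario simulation (this is not ITC Infotech’s system, and the probabilities, durations, and costs below are invented), a simple Monte Carlo run over an event’s probability and duration can estimate the expected and tail shortfall costs:

import numpy as np

rng = np.random.default_rng(42)

# Invented parameters for a single supplier-disruption scenario.
event_probability = 0.15          # chance the disruption occurs in the planning period
duration_weeks = (4, 16)          # min/max outage length in weeks, if it occurs
weekly_shortfall_cost = 250_000   # illustrative cost of unmet demand per week

def simulate_scenario(n_runs=10_000):
    # Monte Carlo estimate of the expected loss and a 95th-percentile tail loss.
    losses = np.zeros(n_runs)
    occurred = rng.random(n_runs) < event_probability
    durations = rng.uniform(duration_weeks[0], duration_weeks[1], size=n_runs)
    losses[occurred] = durations[occurred] * weekly_shortfall_cost
    return losses.mean(), np.percentile(losses, 95)

expected_loss, tail_loss = simulate_scenario()
print(f"Expected loss: {expected_loss:,.0f} | 95th percentile loss: {tail_loss:,.0f}")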

As pressure on organizations grows, as it is bound to do even after COVID-19 is behind us, they will need to move toward dynamic supply chains that can make granular adjustments in real time using scenario-based forecasting.

A complete shift in perspective is necessary: from history-based forecasting to lead-indicator-based decision-making through a connected and collaborative system. The future of agile supply chain systems has been cast. The faster organizations embrace intelligent planning, the better equipped they will be to face extreme disruptions.


Author Details:

Bob Debicki, Senior Director, CPG & Retail, Anaplan

Amit Paul, Senior Principal Consultant, ITC Infotech India Ltd


DSC Weekly Digest 24 August 2021

It’s Gartner Hype Cycle Day! This is roughly analogous to the Oscars for emerging technologies, in which the Gartner Group looks through its telescopes to see what could end up being the next big thing over the course of the next decade. The idea behind the hype cycle is intriguing: emerging technologies seem to come out of nowhere, explode into prominence while still less than fully mature, then seemingly disappear for a while as the promise of the hype gives way to the reality of implementation and real-world problems. Once these problems are worked out, the technology in question gets adopted into the mainstream.

Gartner Hype Cycle for 2021

The hype cycle is frequently the source of investor bingo, in which young startups pick the term that most closely reflects their business model and then pitch it to angel investors as supporting evidence that their product is worth investing in. It also gets picked up by the marketing departments of existing companies working to market their newest offerings to customers. For this reason, the Gartner Hype Cycle (GHC) should always be taken as a speculative guide, not a guarantee.

Nonetheless, from the standpoint of data science, this year’s graph is quite intriguing. The GHC looks at emerging technologies, which in general means that even the most immediate items on the curve are at least two years out, with the rest either five to ten years out or beyond. In the immediate term, what emerges is that distributed identity, whether of individuals, organizations (LEIs), or things (NFTs), will become a critical part of any future technology. This makes sense: these identities effectively provide the hooks that connect the physical world and its virtual counterpart, and as such they become essential to the production of digital twins, one of the foundations of the metaverse. Similarly, generative AI, which takes data as input and uses it to create relevant new content, has obvious implications for virtual reality (which is increasingly coming under the heading of Multiexperience).

A second discernible trend is the shift of application development away from being the preserve of professional developers toward something constructed directly by a subject matter expert, data analyst, or decision-maker. This is the natural extension of the DevOps movement, which took many of the principles of Agile and concentrated primarily on automating whatever could be automated. It can also be seen in the rise of composable applications and networks. Indeed, this simply continues a trend in which the creation of specific algorithms by software developers is being replaced by the development of models by analysts; as more of that work becomes automated, the next logical step is the compartmentalization of such analysis within generated software and configurable pipelines.

The final trend, and one that looks to become a reality around 2035, is the long-term integration of social structures with the emerging physical structures. For instance, decentralized finance is seen as still being some ways out, even with blockchain and similar distributed ledger technology becoming pervasive today. This is because finance, while technologically augmented at this point, still involves social integration, something that doesn’t even remotely exist at this stage. Similarly, machine-readable (and potentially writable) legislation falls into the domain of social computing at a very deep level and requires a degree of trust-building that looks to be at least a decade out, if not more.

All of these revolve around graphs in one way or another, to the extent that we may now be entering into the graph era, in which the various social, financial, civic, and even ecological graphs can be more readily articulated. The biggest barrier to implementing the metaverse, ultimately, will be graph integration, and it is likely that this, coupled with increasing integration of graph and machine learning technologies, will dominate over the next couple of decades.

One of the left-most (meaning most speculative) technologies listed here is Quantum ML. If the period from 1990 to 2015 could be considered the relational age (defined by SQL), and 2015 to 2040 the graph age, then quantum machine learning will likely end up marking the end of the graph era and the beginning of the quantum data era. This is perhaps significant, as 2040 also marks the beginning of the singularity by many different measures, something I will be writing about in more depth soon.

In media res,

Kurt Cagle
Community Editor,
Data Science Central

To subscribe to the DSC Newsletter, go to Data Science Central and become a member today. It’s free! 


New Study Warns: Get ready for the next pandemic

  • A new study warns we’re likely to see another major pandemic within the next few decades.
  • A new database of pandemic information was used to calculate the increased probability.
  • A major pandemic is statistically likely to wipe out human life within the next 12,000 years.

For much of the past century, the fear has been that a calamity like an asteroid strike, supernova blast, or environmental change would wipe out humanity. But new research from the Duke Global Health Institute points to a very different demise for humans. The study, called Intensity and frequency of extreme novel epidemics and published in the Proceedings of the National Academy of Sciences, used almost four hundred years’ worth of newly assembled data to make some dire predictions [1].

The authors combined the new database with estimated rates at which zoonotic diseases (those transmitted from animals to humans) emerge as a result of human-caused environmental change. “These effects of anthropogenic environmental change,” they warn, “may carry a high price.”

A New Database Paints a Picture

Research into the probability of another pandemic has long been held back by a lack of access to data, short observational records, and analysis methods that assume stationarity.

The conventional theory of extremes, such as major pandemics, assumes that the process generating events is stationary, meaning that shifts in time do not change the shape of the distribution. But the authors found that pandemic data is nonstationary. While long-term observations and analysis tools for nonstationary processes were available in other disciplines, global epidemiological information on the topic was “fragmented and virtually unexplored”.

The team addressed the problem, in part, by creating a new database covering four centuries of disease outbreaks, which they have made publicly available in the Zenodo repository along with the MATLAB code used to analyze it [3].

The database, which contains data on 182 historical epidemics, led the authors to conclude that while the rate of epidemics varies wildly over time, the tail of the probability distribution of epidemic intensity (defined as the number of deaths divided by the global population and the epidemic’s duration) decays slowly. In other words, the probability of an epidemic decreases only slowly as its intensity grows. That does not mean another epidemic is unlikely; quite the opposite: a slowly decaying tail means that even very intense epidemics retain a non-negligible probability.
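
To see what a slowly decaying tail implies in practice, the toy sketch below computes intensity exactly as defined above (deaths divided by global population and duration) for a few made-up outbreaks, then evaluates the tail of a heavy-tailed generalized Pareto distribution. The outbreak numbers and the shape and scale parameters are assumptions chosen for illustration, not the study’s data or fitted model.

    from scipy.stats import genpareto

    # Hypothetical outbreaks: (deaths, global population at the time, duration in years).
    # Placeholder values for illustration, not rows from the study's database.
    records = [
        (50_000_000, 1.8e9, 2),   # a 1918-flu-scale event
        (1_000_000, 7.8e9, 2),    # a smaller modern outbreak
        (200_000, 7.0e9, 1),
    ]

    # Intensity = deaths / (global population * duration), as defined in the study.
    for deaths, population, years in records:
        print(f"intensity = {deaths / (population * years):.2e}")

    # A slowly decaying (heavy) tail, modeled here as a generalized Pareto distribution
    # with assumed parameters, keeps extreme intensities surprisingly probable.
    tail = genpareto(c=0.6, scale=1e-3)
    for x in (1e-3, 1e-2, 1e-1):
        print(f"P(intensity > {x:g}) = {tail.sf(x):.4f}")

The point is qualitative: when the tail is heavy, probabilities fall off far more slowly than bell-curve intuition suggests, which is exactly why the authors do not treat extreme pandemics as vanishingly rare.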

When the authors combined the model with increasing rates of disease emergence from animal reservoirs linked to environmental change, they found that the probability of observing another serious pandemic (currently a lifetime risk of around 38%) will likely double in the next few decades.

A New Pandemic is Around the Corner

Novel pathogens like COVID-19 have been emerging in the human population at an increasing rate over the last half-century. The new study estimates that the probability of a novel disease outbreak will grow from its current level of about 2% a year to roughly three times that. Using that risk factor, the researchers estimate that another major pandemic will very likely happen within 60 years, much sooner than previously anticipated, which makes it very likely you will see another major pandemic in your lifetime.

That’s not to say you’ll have to wait until you’re 80 years old to see another nefarious virus sweep across the globe. The event is equally probable in any one year during that time frame, said Duke University professor Gabriel Katul, Ph.D., one of the paper’s authors. “When a 100-year flood occurs today, one may erroneously presume that one can afford to wait another 100 years before experiencing another such event,” says Katul. “This impression is false. One can get another 100-year flood the next year.”
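
Katul’s 100-year-flood point is just the arithmetic of a constant annual probability: the chance in any single year never changes, but the chance of seeing at least one event over a span of years accumulates. Here is a small sketch that treats the article’s roughly 2%-a-year figure as a fixed rate, a deliberate simplification of the study’s nonstationary model:

    def prob_at_least_one(annual_prob, years):
        """Chance of at least one event in `years`, assuming a constant,
        independent annual probability (a deliberate simplification)."""
        return 1 - (1 - annual_prob) ** years

    annual = 0.02  # the article's current ~2%-per-year estimate, held fixed here
    for horizon in (1, 10, 30, 60):
        print(f"{horizon:>2} years: {prob_at_least_one(annual, horizon):.1%}")
    # The single-year probability stays at 2%, yet over 60 years the cumulative
    # chance of at least one such pandemic comes to roughly 70%.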

In addition to warning about the perils of ignoring human-induced environmental change, the authors extrapolated the data to make another dire prediction. In the press release [2], they state that it is statistically likely that, within the next 12,000 years, a major pandemic will wipe out the human race, which would make it extremely unlikely that mankind will still be around when the next extinction-level asteroid hits Earth.

References

[1] Intensity and frequency of extreme novel epidemics

[2] Statistics say large pandemics are more likely than we thought

[3] A Global Epidemics Dataset (1500-2020)

