AI-Led Medical Data Labeling For Coding and Billing

The healthcare sector is among the largest and most critical service sectors globally. Recent events such as the Covid-19 pandemic have intensified the challenge of handling medical emergencies with adequate capacity and infrastructure, and healthcare equipment supply and usage came under sharp focus during the pandemic. The sector continues to grow at a fast pace: it is projected to register a 20.1% CAGR and to surpass $662 billion by 2026.

Countries like the US spend a major share of their GDP on healthcare. The sector is technologically advanced and forward-looking, and the amount spent per person annually is higher than in any other country in the world. It spans general acute care, government, specialty, non-profit, and privately owned hospitals. Healthcare funding includes private, governmental, and public insurance claims and finance processes. Private healthcare in the US is dominated by privately run care facilities, where costs are largely borne by the patients.

Underlying challenges in the sector’s digital transformation

The healthcare sector has its own challenges to deal with. Dependence on legacy applications and conventional procedures for treating patients has led to significant revenue losses. Hospital revenues have taken a hit, and even where an EHR (electronic health record) system is implemented, the granular information in a physician's clinical summary is often difficult to record and maintain.

Medical billing and claims processing is another difficult area to manage, and for healthcare institutions, maintaining a seamless patient experience has become crucial. Translating a patient's medical details and history into codes allows healthcare institutions and payers to track and monitor the patient's current medical condition, manage history, and keep records correct. The process is lengthy, and the slightest human error can create discrepancies in the patient's history and in the financial transactions for the treatment received. That can disrupt the claims and disbursal process for all future transactions and put tracking of the patient's current medical condition at risk. Beyond that, it creates additional hassles for medical practitioners, healthcare institutions, and insurance providers when processing and settling claims.

In terms of tracking and providing appropriate treatment, health practitioners and institutions also face the persistent challenge of collating patient data from multiple sources and analyzing it manually.

Building medical AI models with reliable training datasets

The healthcare sector deals with enormous volumes of data that are sensitive to patients' health and that also affect physicians' credibility.

For a long time, health institutions have invested significantly in managing patient records, relying on software and costly legacy applications that have their own limitations. Hiring trained professionals such as medical coders and outsourcing to service platforms have added to the spending. Implementation of EHR systems has improved the fundamental processes, yet technical limitations have made it difficult for the medical sector to rely on them entirely. This has led to delays in accessing patients' treatment histories, suggesting effective treatment and care, billing, and processing medical claims, ultimately hurting revenue growth for health institutions and other players in the chain.

To handle such scenarios, the healthcare community has aggressively adopted artificial intelligence backed by machine learning and NLP to automate key processes. AI and machine learning programs are automating procedures like medical coding and shortening the patient-care cycle. In more than 100 countries, clinical summaries are converted into standard medical codes, and AI and natural language processing (NLP) programs trained on structured medical training data are helping doctors access a patient's history from those codes instantly, without delay. The effectiveness of AI-based results is reducing patient visits and the stress on doctors, and improving the entire lifecycle of the patient's experience with doctors, health service providers, and medical claim payers.
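
To make the idea concrete, here is a purely illustrative sketch (not a description of any vendor's production system) that frames code assignment as text classification: clinical summary text in, a suggested billing code out. The toy training examples and the scikit-learn pipeline below are assumptions for demonstration only.

```python
# Illustrative sketch: treating medical code assignment as text classification.
# The training examples below are a tiny, made-up toy dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "patient presents with poorly controlled type 2 diabetes",
    "essential hypertension, blood pressure elevated at rest",
    "acute upper respiratory infection with cough and congestion",
    "follow-up visit for type 2 diabetes mellitus",
]
train_codes = ["E11.9", "I10", "J06.9", "E11.9"]  # ICD-10 codes used as class labels

# TF-IDF features plus a linear classifier stand in for a full NLP model
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_codes)

# Suggest a code for a new clinical summary; a human coder would review it
print(model.predict(["patient reports elevated blood pressure readings at home"]))
```

A production pipeline would add large labelled corpora, richer models, and human review, but the structure is the same: labelled clinical text in, suggested codes out.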

In addition, AI-enabled processes are easing the pressure on all stakeholders in the loop with tasks such as revealing out-of-pocket expenses to patients before they avail of healthcare services, which helps patients plan their expenditure beforehand. They have also simplified pre-authorization procedures and accelerated the entire cycle of patient care by the healthcare provider. Firms like Cogito are actively developing medical coding and billing training data with the help of a specialized team of in-house medical practitioners to deliver data labeling services. In medical billing, AI programs powered by machine learning and structured training data are ensuring efficient revenue cycles and preventing claim denials caused by incorrect or missing patient information.

Endnote:

Recent AI implementations have helped healthcare institutions provide proactive support to patients and stem significant revenue loss in the process. Artificial intelligence is letting patients, claim payers, and healthcare service providers work in tandem, accelerating overall sectoral growth. AI-led automation powered by NLP is reported to cut time and overall costs by up to 70%. From availing a health service to identifying the right health institution for treatment, both patients and doctors are gaining immense value from the transformation. Originally published at – Healthcare document processing


Career Opportunities in Blockchain and The Top Jobs You Need to Know


Blockchain expertise is one of the fastest-growing skills, and demand for blockchain professionals is picking up momentum in the USA. Since cryptocurrencies have done well over the last few years and many investors are looking to invest in them, the demand for blockchain engineers is growing. Blockchain technology certifications have also become very popular in the last few years.


The Growing Demands for Blockchain Specialists

Demand for blockchain professionals is increasing, and blockchain technology certifications are popular courses at institutes and universities. According to Glassdoor, global demand for blockchain professionals grew by 300% in 2019 compared to 2018, and this growth is expected to continue in the years to come.

With the arrival of cryptocurrencies, the disruptive technology behind them, blockchain, became a hot commodity in the IT world. It is estimated that by 2023, $15.9 billion will have been invested in blockchain solutions worldwide. To attract that kind of investment, there needs to be an ample supply of blockchain solutions and of the professionals who build them.

The food and agriculture industry, among many others, is developing solutions based on blockchain technology. With the rising focus on digital transformation, the technology is set to enter many more industries beyond the financial sector.

The Top Blockchain Jobs you Need to Know About

• Blockchain Developer – Blockchain developers need extensive experience with the C++, Python, and JavaScript programming languages. Many companies, such as IBM, Cygnet Global Solutions, and AirAsia, are hiring blockchain developers across the globe.

• Blockchain Quality Engineer – A blockchain quality engineer ensures that quality assurance is in place and that testing is done thoroughly before a product is released. Since daily spikes in buying and selling are common in the blockchain industry, the platforms need to be robust enough to take the load.

• Blockchain Project Manager – Blockchain project managers mostly come from a development background in C, C++, or Java, and they possess excellent communication skills. They need hands-on experience as a traditional (cloud) project manager. If they understand the complete software development life cycle (SDLC) of the blockchain world, they can command a highly paid job in the market.

• Blockchain Solution Architect – The blockchain solution architect is responsible for designing, assigning, and connecting all the components of the blockchain solution, and for hand-holding the team's experts through implementation. Blockchain developers, network administrators, blockchain engineers, UX designers, and IT operations need to work in tandem to execute the blockchain solution as per the SDLC plan.

• Blockchain Legal Consultant – Since blockchain in the financial sector involves the exchange of money in the form of cryptocurrencies, there is bound to be plenty of litigation and irregularity. Because these currencies operate on peer-to-peer networks and governments have very limited visibility into the transactions, fraudulent activity and money laundering will be conducted through them. Hence there will always be demand for blockchain legal consultants; it is a new field that requires an in-depth understanding of the technology and of how the financial world revolves around it.

Top companies hiring blockchain professionals

• IBM – Hires blockchain developers and blockchain solution architects in its offices across the globe.
• Toptal – A platform where freelance blockchain engineers and developers can select projects and work on them from any part of the globe. Many Fortune 500 companies are exploring hiring freelance blockchain developers this way.
• Gemini Trust Company, LLC- New York, USA
• Circle Internet Financial Limited- Boston, Massachusetts, USA
• Coinbase Global Inc. – Remote-first, with no physical headquarters

Top Blockchain Certifications for Professionals in 2021

• CBCA certifications (Business Blockchain Professional, Certified Blockchain Engineer)
• Certified Blockchain Expert courses from IvanOnTech are quite economical and ideal for beginners.
• Blockchain Certification Course (CBP) — EC-Council (International Council of E-Commerce Consultants)


How business can clear a path for artificial general intelligence


20 years ago, most CIOs didn’t care much about “data”, but they did care about applications and related process optimization. While apps and processes were where the rubber met the road, data was ephemeral. Data was something staffers ETLed into data warehouses and generated reports from, or something the spreadsheet jockeys worked with. Most of a CIO’s budget was spent on apps (particularly application suites) and the labor and supporting networking and security infrastructure to manage those apps. 

In the late 2000s and early 2010s, the focus shifted more to mobile apps. Tens of thousands of large organizations, which had previously listened to Nick Carr tell them that IT didn't matter anymore, revived their internal software development efforts to build mobile apps, or started paying outsiders to build mobile apps for them.

At the same time, public cloud services, APIs, infrastructure as a service (IaaS) and the other Xs as a service (XaaS) began to materialize. By the late 2010s, most large organizations had committed to "migrating to the cloud." In practice, that meant most large businesses (meaning 250 or more employees) began to subscribe to hundreds, if not thousands, of software-as-a-service (SaaS) offerings, in addition to one or more IaaSes, PaaSes, DBaaSes and other XaaSes.

Business departments subscribed directly to most of these SaaSes with the help of corporate credit cards. Often a new SaaS offers department managers a way to get around IT bureaucracy and try to solve the problems IT doesn't have the wherewithal to address. But the IT departments themselves backed the major SaaS subscriptions in any case; these were the successors of the application suites, and networking and application suites, after all, were what IT understood best.

One major implication of XaaS, and particularly SaaS, adoption is that businesses in nearly all sectors of the economy (apart from the dominant tech sector) have even less control of their data now than they did before. And the irony of it all is that, according to Gartner, 99 percent of SaaS security breaches through 2024 will be considered the customer's fault, regardless of how sound the foundation for the SaaS is or how it's built.

The end of AI winter…and the beginning of AI purgatory

20+ years into the 21st century, “data” shouldn’t just be an afterthought anymore. And few want it that way. All sorts of  hopes and dreams, after all, are pinned on data. We want the full benefits of the data we’re generating, collecting, analyzing, sharing and managing, and of course we plan to generate and utilize much more of it as soon as we can.  

AI, which suffered through three or four winters over the past decades because of limits on compute, networking and storage, no longer has to deal with major low temperature swings. Now it has to deal with prolonged stagnation.

The problem isn't that we don't want artificial general intelligence (AGI). The problem is that getting to AGI requires the right kind of data-centric, system-level transformation, starting with a relationship-rich data foundation that also creates, harnesses, maintains and shares the logic currently trapped in applications. In other words, we have to desilo our data and free most of the logic trapped inside applications.

Unless data can be unleashed in this way, contextualized and made more efficiently and effectively interoperable, digital twins won’t be able to interact or be added to systems. Kevin Kelly’s Mirrorworld will be stopped dead in its tracks without the right contextualized, disambiguated and updated data to feed it.

As someone who has tracked many different technologies and how they are coming together, I feel a particular frustration that brought to mind a story I read a while back. 117 years ago, The Strand Magazine published a short story by H.G. Wells, a parable, really, called "The Country of the Blind". The parable is about a mountain climber named Nunez who has all of his faculties, including sight. After a big climb, Nunez falls down the remote slope of a South American mountain and ends up in a village consisting entirely of blind people.

Once Nunez has lived among the villagers for a bit and learned they're all blind, he assumes he can help them with his gift of sight. "In the Country of the Blind, the one-eyed man is king," he repeats to himself.

The thing is, the villagers don't believe in the sense of sight. They don't put any stock in what Nunez is saying. Finally, realizing he can't help them, Nunez leaves and starts to climb back up the mountain. In a 1939 update to the story, Nunez notices a rockslide starting and calls out to the villagers to save themselves. They again ignore him. The rockslide buries the village.

The data-centric architecture that AGI requires

Nowadays, data can be co-mingled with logic in standard knowledge graphs. In these knowledge graphs, description logic (including relationship logic, which contextualizes the data so it can be shared and reused) and rule logic, together with the data, become a discoverable, shared and highly scalable resource: a true knowledge graph that allows the modeling and articulation of the business contexts that AGI demands.

With a properly formed knowledge graph as the foundation, a data-centric rather than application-centric architecture, as Dave McComb of Semantic Arts points out in Software Wasteland and The Data-Centric Revolution, can become a means of reducing complexity rather than adding to it. Even more importantly, these knowledge graphs are becoming a must-have for scalable AI operations in any case.
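
As a minimal sketch of what "data plus contextualizing relationships in one shared resource" looks like in practice (using the open-source rdflib library and a made-up supplier example, not Semantic Arts' own tooling):

```python
# Minimal knowledge-graph sketch with rdflib: data plus relationship
# context live together in one shared, queryable resource. Names are illustrative.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# Facts ("data") and the relationships that contextualize them
g.add((EX.Widget42, RDF.type, EX.Product))
g.add((EX.Widget42, EX.hasSupplier, EX.AcmeCorp))
g.add((EX.AcmeCorp, EX.locatedIn, Literal("Ohio")))

# Any application, or an AI agent, can discover the same context via SPARQL
results = g.query(
    """
    SELECT ?product ?region WHERE {
        ?product ex:hasSupplier ?s .
        ?s ex:locatedIn ?region .
    }
    """,
    initNs={"ex": EX},
)
for product, region in results:
    print(product, "is supplied from", region)
```

Because the relationships travel with the data, downstream applications and models can query the same context instead of re-deriving it from siloed tables.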

Sounds great, doesn’t it? The thing is, even though these methods have been around for years now, I’d guess that less than one percent of those responsible for AI in enterprise IT are aware of the broad knowledge graph-based data and knowledge management possibilities that exist, not to mention the model-driven development possibilities. This is despite the fact that 90 percent of the companies with the largest market capitalizations in the world have been using knowledge graphs for years now.


Why such a low level of awareness? Most are too preoccupied with the complexities of cloud services and how to take best advantage of them to notice. If they are innovating on the AI/data science front, they’re consumed with small, targeted projects rather than how systems need to evolve. 

17 years after Nick Carr's book Does IT Matter? was published, most are still focused on the faucet, while the rest of the Rube Goldberg-style data plumbing has long since become rusty and obsolete and should have been swapped out years ago.

Should software as a service come with a warning label?

Which begs the question: why should we trust the existing system at all if, as McComb points out, it's so duplicative, antiquated, and wasteful? It's easy to make the case that we shouldn't. Considering that so few know how true data-centric systems should be designed, commercial software should come with prominently placed warning labels, just like drugs do. That's even more the case with software that's data-dependent, "AI"-enhanced applications, for instance.

This is not to mention that many B2C SaaSes are basically giant data farming, harvesting and personalized advertising operations, platforms built on highly data-dependent machine-learning loops. From a privacy and security perspective, much of the risk comes from data farming, storing, duplicating and distributing. Most major B2B SaaSes, for their part, are designed to take best advantage of customer data, but only within the SaaS or the provider's larger system of systems.

Shouldn't we put warning labels on SaaSes if billions of people are using them, especially while those SaaSes use archaic forms of data and logic management that don't scale? Some of the necessary cautions for those who dare to use today's more ominous varieties of AI-enhanced, data-dependent and data-farming software include the following, each paired with a principle to counter the implicit risk:

Warning: This new AI-enhanced technology, even more than most previous software technologies in use, has been developed in a relative vacuum, without regard to broader, longer-term consequences or knock-on effects.

Counter-principle: Murphy's Law (anything that can go wrong, will go wrong) applies when adding this software to operational systems. Relevant backup plans and safety measures should be put in place prior to use.

Warning: This social media platform utilizes advanced gamification techniques designed to encourage high levels of use. When excessive use is coupled with the platform's inherently broad disinformation campaign distribution abilities at web scale, the risk of sociopolitical destabilization may increase dramatically.

Counter-principle: Occam's Razor (the simplest answer is usually correct) is a good starting point for any assessment of widely believed but easily discreditable theories and assertions (e.g., "COVID-19 isn't real") in postings you come across, like, or share that depend on high numbers of assumptions. Consider carefully the disinformation proliferation risks that will result.

Warning: A typical medium to large-sized organization is likely to have subscriptions to hundreds or thousands of software-as-a-service platforms (SaaSes). Are you sure you want to subscribe to yet another one and add even more complexity to your employer's existing risk, compliance, application rationalization and data management footprint?

Counter-principle: McComb's Cost of Interdependency Principle applies: "Most of the cost of change is not in making the change; it's not in testing the change; it's not in moving the change into production. Most of the cost of change is in impact analysis and determining how many other things are affected. The cost of change is more about the systems being changed than it is about the change itself." Please don't keep adding to the problem.

Warning: This form requires the input of correlatable identifiers such as social security, home address and mobile phone numbers that could afterwards be duplicated and stored in as many as 8,300 or more different places, depending on how many different user databases we decide to create, ignore the unnecessary duplication of, or sell off to third-party aggregators we have no control over and don't have the resources or desire to monitor in any case.

Counter-principle: The Pareto Principle (80 percent of the output results from 20 percent of the input) may apply to collectors of personally identifiable information (PII), i.e., one out of five of the sites we share some PII with might be responsible for 80 percent of that PII's proliferation. Accordingly, enter any PII here at your own risk.

As you might be able to guess from a post that’s mostly a critique of existing software, this blog won’t be about data science, per se. It will be about what’s needed to make data science thrive. It will be about what it takes to shift the data prep burden off the shoulders of the data scientist and onto the shoulders of those who’ll build data-centric systems that for the most part don’t exist yet. It will be about new forms of data and logic management that, once in place, will be much less complicated and onerous than the old methods. 

Most of all, this blog will be about how systems need to change to support more than narrow AI. All of us will need to do what we can to make the systems transformation possible, at the very least by providing support and guidance to true transformation efforts. On their own, systems  won’t change the way we need them to, and incumbents on their own certainly won’t be the ones to change them.


Binding Cloud, PLM 2.0, and Industry 4.0 into cohesive digital transformation


In the environment of Industry 4.0, the role of PLM is expanding. The interplay between PLM and Industry 4.0 technologies – like the Internet of Things (IoT), Big Data, Artificial Intelligence (AI), Machine Learning, AR/VR, Model-Based Enterprise (MBE), and 3D Printers – is on the rise. But Industry 4.0 is unlike any previous industrial revolution. The last three, from 1.0 to 3.0, were aimed at driving innovation in manufacturing. 4.0 is different. It is changing the way of thinking. And PLM is at the heart of this new way of thinking.

Industry 4.0 is marked by pervasive connectedness. Smart devices, sensors, and systems are connected, creating a digital thread across Supply Chain Management (SCM), Enterprise Resource Planning (ERP), and Customer Experience (CX) applications. This demands that new digital PLM solutions be placed at the core, making it the key enabler of digital transformation.

However, organizations cannot take a big bang approach to digital transformation – or, by implication, to PLM. Issam Darraj, ELIT Head Engineering Applications at ABB Electrification, says that organizations need to take this one step at a time. They need to first build the foundation for digital transformation, then create a culture that supports it. They should invest in skills and collaboration, focus on change management, become customer-centric, and should be able to sell anytime anywhere. Simultaneously, PLM must evolve into PLM 2.0 as well.

PLM 2.0 is widely seen as a platform whose responsibility does not end when a design is handed over to manufacturing. PLM 2.0 impacts operations, marketing, sales, services, end-of-life, recycling, and more. What began as an engineering database for MCAD and ECAD is now an enabler of new product design, with features such as bill of materials management, collaboration, and release processes rolled into it.

As the role of PLM evolves, it is moving to the cloud. We believe that SaaS PLM is the future, because the cloud is central to Industry 4.0. With connected systems and products sending a flood of real-time data back to design, operations, and support functions, the cloud has become the backbone for data and for driving real-time decisions. Organizations that once used the cloud mainly to bring down costs must change that focus: availability and scalability should be the primary considerations.

Digital Transformation, Industry 4.0 technologies, PLM and Cloud are complex pieces of the puzzle. Most organizations need partners who understand every individual piece of the puzzle and know how to bring them together to create a picture of a successful, competitive, and customer-focused organization. An experienced partner will be able to connect assets, create data-rich decisioning systems, leverage Industry 4.0 technologies, and leverage Cloud to expand the role of PLM.

Author:

Sundaresh Shankaran

President, Manufacturing & CPG,

ITC Infotech


Responding to Supply Chain Uncertainty with Intelligent Planning


Supply chain disruptions are anything but uncommon. In 2018, 2,629 disruption events were reported.[i] In 2019, 51.9% of firms experienced supply chain disruptions, with 10% experiencing six or more.[ii] Largely, these were man-made (79%), and most had low severity (63%). Over time it has become evident that organizations' ability to negotiate medium to high severity disruptions (37%) is low. In 2019, the impact of this inability was a loss of €100M for 1 in 20 organizations, with the average annual cost of disruption being €10.5M.[iii] Significantly, it took organizations 28 weeks on average to recover from a disruption.[iv] The numbers don't reveal the entire picture, but in themselves they indicate a lack of resilience and an urgent need for organizations to predict, assess and mitigate disruptions with greater agility.

The real picture is that the cost of disruption is under-reported: many organizations lack the ability to accurately compute the cost of response, the opportunity cost, and the reputational damage. The capability gaps go beyond this. While most organizations can negotiate minor disruptions through safety stocks and long-term contracts, disruptions of higher severity and frequency present overwhelming challenges. These gaps can be bridged by:

  • Creating early warning capabilities
  • Anticipating the impact of events on supply chains
  • Understanding the financial impact and transmitting it across value chains

The essence of resilience

The only way to beat today's volatility and unpredictability is to bring speed and agility through connected, real-time systems. Organizations take days to realize they have been hit by a disruption, and boardroom discussions on risk assessment and the formulation of a management strategy run into another handful of days (see Figure 1 for the net saving in response time that can be achieved between traditional and agile responses to disruption). All the while the organization is bleeding, losing business, customers, and reputation. Reducing the response time from 28 weeks to 16 weeks is the essence of building resilience into supply chain planning.

Figure 1: Net saving in response time between traditional and agile responses to disruption

Modern technology can help cut time-to-respond by almost 60%, enabling faster risk simulation and helping prepare an optimal response. This is easier said than done, and most organizations are stumped by the roadblocks they face on the way to resilience. Bob Debicki, Senior Director, CPG & Retail, Anaplan, says that "Organizations are ill-equipped to handle disruption." Debicki observes that only 22.6% of organizations use technology to map full tier-n networks, which means there is a severe lack of visibility into what is happening within supply chains. 73% of organizations still rely on spreadsheets to manage supply chain risks, indicating that applying intelligent technology can make a dramatic change and build resilience. Organizations can change their approach by:

  1. Building communication and visibility through supplier and trading partner collaboration
  2. Establishing interlinkages across n-tier supply networks
  3. Creating early sensing capability by extracting insights from systems of record
  4. Using technology for rapid simulation and scenario planning ahead of disruptions, drawing on more granular information accessible at higher frequency (a simple simulation sketch follows this list)
  5. Enabling a cohesive response to disruptions through a connected planning ecosystem
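
As a toy illustration of point 4, and emphatically not Anaplan's or ITC Infotech's actual planning engine, a small Monte Carlo sketch can turn assumed disruption probabilities and durations into an expected-shortfall figure; connected planning platforms automate this kind of what-if analysis at far greater granularity:

```python
# Toy Monte Carlo scenario simulation for a single supplier disruption.
# All parameters are illustrative assumptions, not real planning data.
import random

def simulate_shortfall(p_event=0.15, mean_duration_weeks=6,
                       weekly_demand_units=1000, safety_stock_units=2500,
                       n_runs=10_000, seed=42):
    """Estimate expected unmet demand (units) over one planning year."""
    rng = random.Random(seed)
    shortfalls = []
    for _ in range(n_runs):
        if rng.random() < p_event:                           # does a disruption occur this year?
            duration = rng.expovariate(1 / mean_duration_weeks)  # how long it lasts
            lost_supply = duration * weekly_demand_units
            shortfalls.append(max(0.0, lost_supply - safety_stock_units))
        else:
            shortfalls.append(0.0)
    return sum(shortfalls) / n_runs

print(f"Expected annual shortfall: {simulate_shortfall():,.0f} units")
```

Varying the assumed event probability, duration, or safety stock and re-running the simulation is exactly the kind of rapid scenario comparison described above.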

But many businesses are getting comfortable with the existing baseline for forecasting. Unfortunately, that baseline is no longer valid in light of the disruption created by the COVID-19 pandemic. Early sensing capabilities using real-time information coupled with granular information (down to SKU and location) have become essential if organizations are to survive the shock of events such as the outbreak of COVID-19.

Breaking out of siloes

Many of the required capabilities to withstand extreme disruption already exist within organizations. But they exist in siloes. Organizations need to bring all their capacities and data onto a single platform to create the perfect response.

By converging the data organizations can build the three pillars of intelligent planning central to resilience (see figure 2 for details): deep visibility into the supply chain that helps develop an interactive view of risk, the ability to simulate scenarios, and the ability to respond through connected planning. The goal of convergence should be to sense fast, simulate, assess the impact on supply chains and bottom lines, and respond at speed with optimal mitigation strategies.

Figure 2: The three pillars of intelligent planning

What should organizations focus on during disruptions such as those triggered by COVID-19? We recommend four focus areas: demand sensing, accountability, new tools for planners that go beyond existing algorithms, and connecting stakeholders (see Figure 3).

Figure 3: The four focus areas during disruption

The smart supply chain solution for complete resilience

Smart organizations are building resilience by connecting their systems to internal and external data sources and to other business areas such as finance, and by integrating with real-time data services. Forward-looking customers are using ITC Infotech's supply chain solutions to turn data into risk scores for potential disruptions, understand the severity of an event, and simulate scenarios for risks ranging from power outages to supplier bankruptcy, labor shortages, natural disasters, political upheavals (for example, Brexit), and foreign exchange rate fluctuations. The system's dashboards show supplier performance (defect rate, productivity, etc.) and rankings to identify potential problems such as cost escalations and impacts on lead times and capacity. Supply chain teams can simulate a variety of parameters, such as event probability and event duration, to reach accurate decisions about managing shortfalls, order plans, and more.

As pressure on organizations grows—as it is bound to even after COVID-19 is past us—there will be a need to get on the path of dynamic supply chains. These can make granular adjustments in real-time using scenario-based forecasting. 

A complete shift in perspective is necessary, from history-based forecasting to lead indicator-based decision-making through a connected and collaborative system. The future of agile supply chain systems has been cast. The faster organizations embrace intelligent planning, the more they will be able to face extreme disruptions with ease.

[i] https://info.resilinc.com/eventwatch-2018-annual-report

[ii] https://www.thebci.org/resource/bci-supply-chain-resilience-report-…

[iii] https://www.thebci.org/resource/bci-supply-chain-resilience-report-…

[iv] https://info.resilinc.com/eventwatch-2018-annual-report

 

Author Details:

 

Bob Debicki,

Sr Director, CPG & Retail,

 Anaplan

 

Amit Paul

Senior Principal Consultant,

ITC Infotech India Ltd

 


DSC Weekly Digest 24 August 2021

It’s Gartner Hype Cycle Day! This is roughly analogous to the Oscars for emerging technologies, in which the Gartner Group looks through their telescopes to see what could end up being the next big thing over the course of the next decade. The idea behind the hype cycle is an intriguing concept – emerging technologies seem to come out of nowhere, explode into prominence while the technology is still less than fully mature, then seemingly disappear for a while as the promise of the hype gives way to the reality of implementation and real-world problems. Once these are worked out, the technology in question gets adopted into the mainstream. 

Gartner Hype Cycle for 2021

The hype cycle is frequently the source of investor bingo, where young startups pick the term that most closely reflects their business model, then pitch it to angel investors as supporting evidence that their product is worth investing in. It also gets picked up by the marketing departments of existing companies that are working to market their newest offerings to customers. For this reason, the Gartner Hype Cycle (GHC) should always be taken as a speculative guide, not a guarantee.

Nonetheless, from the standpoint of data science, this year's graph is quite intriguing. The GHC looks at emerging technologies, which in general means that even the most immediate items on the curve are at least two years out, with the rest either five to ten years out or beyond. In the immediate term, what emerges is that distributed identity, whether of individuals, organizations (LEIs), or things (NFTs), will become a critical part of any future technologies. This makes sense: these identifiers effectively provide the hooks that connect the physical world and its virtual counterpart, and as such they become essential to the production of digital twins, one of the foundations of the metaverse. Similarly, generative AI, which takes data input and uses it to create relevant new content, has obvious implications for virtual reality (which is increasingly coming under the heading of Multiexperience).

A second trend that can be discerned is the shift away from application development as a predominantly human activity toward something constructed by a subject matter expert, data analyst, or decision-maker. This is the natural extension of the DevOps movement, which took many of the principles of Agile and concentrated primarily on automating what could be automated. It can also be seen in the rise of composable applications and networks. Indeed, this simply continues a trend in which the creation of specific algorithms by software developers is being replaced by the development of models by analysts, and as more of that becomes automated, the next logical step is the compartmentalization of such analysis within generated software and configurable pipelines.

The final trend, and one that looks to become a reality around 2035, is the long term integration of social structures with the emerging physical structures. For instance, decentralized finance is seen as still being some ways out, even with blockchain and similar distributed ledger technology becoming pervasive today. This is because finance, while technologically augmented at this point, still involves social integration, something which doesn’t even remotely exist at this stage. Similarly machine-readable (and potentially writeable) legislation falls into the domain of social computing at a very deep level, and requires a level of trust-building that looks to be at least a decade out or more.

All of these revolve around graphs in one way or another, to the extent that we may now be entering into the graph era, in which the various social, financial, civic, and even ecological graphs can be more readily articulated. The biggest barrier to implementing the metaverse, ultimately, will be graph integration, and it is likely that this, coupled with increasing integration of graph and machine learning technologies, will dominate over the next couple of decades.

One of the left-most (meaning most speculative) technologies listed here is Quantum ML. If the period from 1990 to 2015 could be considered the relational age (defined by SQL), and 2015 to 2040 is the graph age, then quantum machine learning will likely end up marking the end of the graph era and the beginning of the quantum data era. This is perhaps significant, as 2040 also marks the beginning of the singularity by many different measures, something I will be writing more about in-depth soon. 

In media res,

Kurt Cagle
Community Editor,
Data Science Central

To subscribe to the DSC Newsletter, go to Data Science Central and become a member today. It’s free! 


New Study Warns: Get ready for the next pandemic


  • A new study warns we’re likely to see another major pandemic within the next few decades.
  • New database of pandemic info used to calculate increased probability.
  • A major pandemic will likely wipe out human life within 12,000 years.

For much of the past century, the fear has been that a calamity like an asteroid strike, supernova blast, or environmental change would wipe out humanity. But new research published by the Duke Global Health Institute points to a much different demise for humans. The study, called "Intensity and frequency of extreme novel epidemics" and published in the Proceedings of the National Academy of Sciences, used almost four hundred years' worth of newly assembled data to make some dire predictions [1].

The authors used the new database along with estimated rates of zoonotic diseases (those transmitted from animals to humans) emerging because of human-caused environmental change. "These effects of anthropogenic environmental change," they warn, "may carry a high price."

A New Database Paints a Picture

One of the reasons that there has been a distinct lack of research into the probability of another pandemic has been a lack of access to data, short observational records, and stationary analysis methods.

The conventional theory of extremes, like major pandemics, assumes that the process of event occurrence is stationary—where shifts in time don’t cause a change in the shape of the distribution.  But the authors found that pandemic data was nonstationary. While long term observations and analysis tools for nonstationary processes were available in other disciplines, global epidemiological information on the topic was “fragmented and virtually unexplored”.  

The team addressed the problem, in part, by creating a new database containing information from four centuries of disease outbreaks, which they have made publicly available in the Zenodo repository along with the MATLAB code that analyzed it [3].  A snapshot of the database is shown below:

(Snapshot of the global epidemics database)

The database, which contains data from 182 historical epidemics, led the authors to conclude that while the rate of epidemics varies wildly over time, the tail of the probability distribution of epidemic intensity (defined as the number of deaths divided by the global population and by the epidemic's duration) decays slowly. The implication is that the probability of an epidemic decreases only slowly as its intensity grows. This doesn't mean that the probability of another epidemic is small; quite the opposite.

When the authors combined the model with increasing rates of disease emergence from animal reservoirs linked to environmental change, they found that the probability of observing another serious pandemic, currently a lifetime risk of around 38%, will likely double in the next few decades.

A New Pandemic is Around the Corner

Novel pathogens like Covid-19 have been emerging in the human population at an increasing rate over the last half century. This new study estimates that the probability of a novel disease outbreak will grow from its current risk of about 2% a year to around three times that. The researchers used that risk factor to estimate that another major pandemic will very likely happen within 60 years, much sooner than originally anticipated, making it very likely that you will see another major pandemic in your lifetime.
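
To see why a modest annual probability becomes a near-certainty over decades, here is a back-of-envelope illustration (my own arithmetic, not a calculation taken from the paper): if an extreme outbreak occurs with independent probability p in any given year, the chance of at least one such outbreak within n years is 1 - (1 - p)^n.

```python
# Back-of-envelope: cumulative probability of at least one outbreak
# within n years, assuming an independent annual probability p.
def prob_within(p_annual: float, years: int) -> float:
    return 1 - (1 - p_annual) ** years

for p in (0.02, 0.06):          # ~2% today vs. roughly three times that
    for years in (10, 30, 60):
        print(f"p={p:.0%}/yr, {years} yrs -> {prob_within(p, years):.0%}")
# At 2% per year, 30 years already gives roughly 45%; at 6% per year,
# 60 years gives well over 95%.
```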

That's not to say you'll have to wait until you're 80 years old to see another nefarious virus sweep across the globe. The event is equally probable in any one year during that time frame, said Duke University professor Gabriel Katul, Ph.D., one of the paper's authors. "When a 100-year flood occurs today, one may erroneously presume that one can afford to wait another 100 years before experiencing another such event," says Katul. "This impression is false. One can get another 100-year flood the next year."

In addition to warning about the perils of ignoring human-induced environmental changes, the authors extrapolated the data to make another dire prediction: In the press release [2], they state that it’s statistically likely that within the next 12,000 years, the human race will die out due to a major pandemic, which means it’s extremely unlikely mankind will be around when the next extinction-level asteroid hits Earth.

References

Mask picture Tadeáš Bednarz, CC BY-SA 4.0 via Wikimedia Commons

[1] Intensity and frequency of extreme novel epidemics

[2] Statistics say large pandemics are more likely than we thought

[3] A Global Epidemics Dataset (1500-2020)


Understanding Self Supervised Learning


In the last blog, we discussed the opportunities and risks of foundation models. Foundation models are trained on broad datasets at scale and are adaptable to a wide range of downstream tasks. In this blog, we extend that discussion to self-supervised learning, one of the technologies underpinning foundation models.

NLP has taken off thanks to Transformer-based pre-trained language models (T-PTLMs). Models like GPT and BERT are built on transformers, self-supervised learning, and transfer learning. In essence, these models build universal language representations from large volumes of text data using self-supervised learning and then transfer this knowledge to downstream tasks. This means that you do not need to train the downstream models from scratch.

In supervised learning, training a model from scratch requires many labelled instances, which are expensive to generate. Various strategies have been used to overcome this problem. With transfer learning, a model learns in one context and applies that knowledge to a related context: knowledge learned in a source task is reused to perform well in a target task, provided the target task is similar to the source task. The idea of transfer learning originated in computer vision, where large pre-trained CNN models are adapted to downstream tasks by adding a few task-specific layers on top of the pre-trained model and fine-tuning them on the target dataset.
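
As a minimal sketch of that computer-vision recipe (assuming PyTorch and torchvision with its pre-trained ResNet-18 weights are available; the five-class target task is made up for illustration), one freezes the pre-trained backbone and fine-tunes only a small task-specific head:

```python
# Transfer-learning sketch: reuse a pre-trained CNN, train only a new head.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")  # knowledge from the source task
for param in backbone.parameters():
    param.requires_grad = False                       # freeze the pre-trained layers

num_target_classes = 5                                # hypothetical target dataset
backbone.fc = nn.Linear(backbone.fc.in_features, num_target_classes)

# Only the new task-specific layer is optimized during fine-tuning
optimizer = optim.Adam(backbone.fc.parameters(), lr=1e-3)
```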

Another problem was that deep learning models like CNNs and RNNs cannot easily model long-term context. To overcome this, transformers were proposed. Transformers contain a stack of encoders and decoders, and they can learn complex sequences.

Transformer-based pre-trained language models (T-PTLMs) evolved in the NLP research community by combining transformers with self-supervised learning (SSL). Self-supervised learning allows transformers to learn from the pseudo supervision provided by one or more pre-training tasks. GPT and BERT were the first T-PTLMs developed using this approach. SSL does not need large amounts of human-labelled data because the supervision signal is derived from the pre-training data itself.

Thus, self-supervised learning (SSL) is a learning paradigm in which the model learns from the pseudo supervision provided by pre-training tasks. SSL also finds applications in areas like robotics, speech, and computer vision.

SSL is similar to both unsupervised learning and supervised learning, yet different from both. Like unsupervised learning, SSL does not require human-labelled instances. Like supervised learning, SSL does rely on a supervision signal, but that signal comes from the pre-training task rather than from human labels.
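
To make "pseudo supervision" concrete, here is a tiny sketch of how a masked-language-model-style pre-training task manufactures its own labels from raw text, with no human annotation. Real T-PTLMs do this at the scale of billions of tokens with subword tokenizers, so treat this only as an illustration:

```python
# Tiny illustration of self-supervised label generation: the training
# targets are produced from the raw text itself, not from human annotators.
import random

def make_mlm_examples(sentence: str, mask_prob: float = 0.15, seed: int = 0):
    """Return (corrupted_tokens, targets) where targets mark the masked words."""
    rng = random.Random(seed)
    tokens = sentence.split()
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            corrupted.append("[MASK]")
            targets.append(tok)        # the model must recover this token
        else:
            corrupted.append(tok)
            targets.append(None)       # nothing to predict here
    return corrupted, targets

text = "self supervised learning creates labels from the data itself"
print(make_mlm_examples(text))
```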

In the next blog, we will continue this discussion by exploring a survey of transformer-based models.

Source: Adapted from

AMMUS : A Survey of Transformer-based Pretrained Models in Natural …

Katikapalli Subramanyam Kalyan, Ajit Rajasekharan, and Sivanesan Sa…

Image source pixabay – Children learning without supervision


7 Ways to Show Kindness With Your Remote Marketing Team


I believe that remote working brings with it improved diversity and a better understanding of other countries and cultures.

But you'll often hear detractors of geographically dispersed marketing teams talking about how you lose something when staff works remotely. Without the proverbial water cooler to chat by, or after-work drinks in the local dive bar, there's supposedly no way for a team to connect. Of course, that's not true; there are plenty of ways to help your remote crew come together, and I should know. I've been working remotely for more than 5 years now and sharing my tips on how to work from home.

It is true, though, that it takes extra thought to make remote team members feel connected and appreciated, and kindness can be a big part of that. So, here are some tips on encouraging kindness in your dispersed marketing team.

Celebration

In an office environment, birthdays are often a big deal. Traditions vary from place to place, but bringing in cakes for colleagues or going out for drinks are common happenings. Then there are weddings, new babies and so on, where a card goes around the building in an envelope, collecting signatures and small donations that are used to purchase a group gift that supports whatever hobbies the person might have.

How do you make that happen remotely? Like most things, it’s possible to do all that from a distance. You need to use the right tools, and to make a little bit more effort.

Something Sweet

There are online bakeries that will send cake anywhere in the world. So, technically it is possible to send out a cupcake or cookie to celebrate an event. But that would also mean sharing home addresses, and generally be a lot more bother and expense than grabbing a box from Krispy Kreme on the way in.

As an alternative, how about asking everyone to have a tasty treat with them at the next daily stand-up? Dedicate the first or last few minutes of the meeting to toasting the birthday girl or congratulating the new dad. Looking at the different baked treats that people bring can be an icebreaker and is a great way to start conversations about different cultures.

Gifts for your remote marketing workers

PayPal, Venmo, Google Wallet: all these and more are ways you can send money to someone regardless of where they are in the world. When that's done, where you buy the gift from is your choice. If you plan far enough ahead, most suppliers can get your delivery there on time. If you leave it to the last minute, then it's probably best left to Amazon to fulfill.

Yes to chitchat

Having a channel that is specifically dedicated to chatting is key. If you haven't already implemented this advice, then World Kindness Day seems like a good time to start.

Encourage your staff to use it, to share what’s going on in their lives, big or small. To wish each other good morning, or goodnight, and check in on how they’re doing. Share jokes. Share memes. It all helps to create a positive working environment.

Positive Feedback

"Thank you" is a powerful phrase, and appreciating what others have done should be a part of the daily stand-up. But sometimes kindnesses are small and don't need to be publicly recognised. For times like that, it's great to have a way for your team to express themselves.

There are a few tools that can help with that, such as HeyTaco!, a Slack chatbot that lets users send each other virtual tacos as a quick and fun thank-you gesture for helpful advice. Another idea is a virtual kudos box, or a team awards system with nominations from within.

Include everyone in the meeting

If you’ve got new staff, give them a thorough onboarding process. Welcoming the new guy is a surefire way to help them integrate into the team, and as well as being kind, that helps boost your productivity. And when you’re chairing a meeting, keep track of who is talking and nudge the reluctant ones to join in. Yes, some of us are more introverted, but we all feel good when we’re asked for our opinion.

No to Gossip

The polar opposite of kindness is when people start talking behind others' backs. It doesn't matter what they're saying; it's the divisiveness that's the problem. Make it clear that you aren't going to tolerate a culture of moaning. One rule that's often talked about is, 'Don't come to me with a problem unless you have a solution.'

One quote often attributed to Buddha (but actually the work of Victorian poet, Mary Ann Pietzker) is, ‘Before you speak, ask: Is it necessary? Is it kind? Is it true? Does it add to the silence?’ Although the source may be fake news, the sentiment is worth reminding people of, every now and then.

Don’t forget about the Cultural Differences

When your staff work in different countries or come from different cultural backgrounds, there can be bumps in the road to mutual understanding. Literally, for colleagues who don’t share the same first language. But little considerations can be put in place, to smooth the way to understanding.

Firstly, agreeing as a team that you'll try to avoid using slang and colloquialisms will help avoid a lot of confusion. For technical terms, your team could curate a glossary that can be kept to hand during meetings, saving time on questions. Sending out as much material as possible ahead of the meeting is good, too; it helps those who have a different first language to follow along if they know roughly what subjects are going to come up.

Be Kind

You’ll probably have heard that remote teams are more productive. That’s (mostly) because staff is happier and healthier if they work from home. And do you know what else makes people happy and healthy? You got it! Kindness.

A research study by Harvard Business School & The University of British Columbia gave participants a small sum of money and told them to spend it either on themselves or someone else. Those that spent it on someone else reported that they were happier than those who’d indulged themselves. So it isn’t just the recipient of kindness who gets the warm & fuzzies, it’s the giver too.

In the meantime, in the words of two of the greatest influencers of our time, ‘Be excellent to each other, and party on, dudes.’


About Deep Learning as a Subset of Machine Learning and AI


Deep learning is widely applied in artificial intelligence and computer vision programs. Across the world, machine learning has added value to a range of tasks using key artificial intelligence methodologies such as natural language processing, artificial neural networks and mathematical logic. Of late, deep learning has become central to machine learning algorithms that must perform highly complex computation and handle gigantic volumes of data.

With its multi-layer neural architecture, deep learning has been solving problems across multiple scenarios and presenting solutions that work. Several deep learning methods are actively applied in machine learning and AI.

Types of Deep learning methods for AI programs

1. Convolutional Neural Networks (CNNs): CNNs, also known as ConvNets, are multilayer neural networks that are primarily used for image processing and object detection (see the minimal example after this list).

2. Long Short Term Memory Networks (LSTMs): Long-term dependencies may be learned and remembered using LSTMs, which are a kind of Recurrent Neural Network (RNN). Speech recognition, music creation, and pharmaceutical development are all common uses for LSTMs.

3. Recurrent Neural Networks (RNNs): Image captioning, time-series analysis, natural-language processing, handwriting identification, and machine translation are all typical uses for RNNs.

4. Generative Adversarial Networks (GANs): GANs are deep learning generative algorithms that generate new data instances that are similar to the training data. GANs aid in the creation of realistic pictures and cartoon characters, as well as the creation of photos of human faces and the rendering of 3D objects.

5. Radial Basis Function Networks (RBFNs): They are used for classification, regression, and time-series prediction and have an input layer, a hidden layer, and an output layer.

6. Multilayer Perceptrons (MLPs): MLPs are a type of feedforward neural network that consists of many layers of perceptrons with activation functions.

7. Self Organizing Maps (SOMs): SOMs enable data visualization by using self-organizing artificial neural networks to decrease the dimensionality of data. SOMs are designed to assist consumers in comprehending this multi-dimensional data.

8. Deep Belief Networks (DBNs): DBNs are generative models with several layers of stochastic, latent variables. For image identification, video recognition, and motion capture data, Deep Belief Networks (DBNs) are employed.

9. Restricted Boltzmann Machines( RBMs): RBMs are stochastic neural networks that can learn from a probability distribution across a collection of inputs.

10. Autoencoders: It’s a sort of feedforward neural network where the input and output are both the same. Autoencoders are utilized in a variety of applications, including drug discovery, popularity prediction, and image processing.
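
As a minimal, illustrative example of the first method above (assuming PyTorch is installed; the layer sizes are arbitrary choices, not a recommended architecture), a small CNN for 28x28 grayscale images could look like this:

```python
# Minimal CNN sketch in PyTorch: two conv blocks followed by a classifier.
# Shapes assume 28x28 grayscale inputs (e.g., MNIST-like data).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x28x28 -> 16x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
dummy_batch = torch.randn(4, 1, 28, 28)   # 4 fake grayscale images
print(model(dummy_batch).shape)           # torch.Size([4, 10])
```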

Why does deep learning matter in AI implementation?

Deep learning models have larger and more specific hardware requirements. Deep learning helps artificial intelligence (AI) systems achieve strong outcomes in prediction and classification tasks. As a subtype of machine learning, it employs artificial neural networks to carry out the machine learning computation, and it enables machines to tackle complicated problems even when they are given large, unstructured, and interconnected data sets.

On the other hand, it's no secret that AI programs require massive amounts of machine learning to predict accurately. The predictions are accurate only if the data set used to train the ML model is well structured and labelled. Hence, the models and results in ML are more data-intensive than those in deep learning.

Training data needs in AI implementations

Training data comes into focus whenever we talk about AI programs and implementation. Every artificial intelligence system requires supervised or unsupervised learning to understand a given problem; without training data, it is unlikely that an AI program will produce any logical results. As a field, AI makes use of unstructured, structured, or hybrid training data in a variety of formats. Deep learning differs in its training data requirements, and the calculation of those requirements must be based on the layers of computation involved. Machine learning, being data-dependent, requires more training data, including both labelled text and labelled image data, to arrive at a model.

Summing up, both deep learning and machine learning depend on a certain amount of data, but deep learning can also work with unlabelled (unsupervised) data and is more computation-intensive.

