Partner Ecosystem: Way forward for CSPs in a hyper-growth digital world

Do what you do best and partner for the rest! This fits well with how Telcos are approaching the digitalization wave. Telecom leaders are aware of the value partners can bring as CSPs look to expand their value chain and revenue streams by exploring cross-industry business opportunities. In fact, the partnership strategy is not new to Telcos, but it is becoming more prominent with the evolving business models as 5G and IoT become mainstream. The partner ecosystem enables CSPs to accelerate innovation, increase agility, and lower operating costs while offsetting pressure on traditional services.

Many CSPs have forged partnerships with other industry verticals to capitalize on the 5G promise. Deutsche Telekom (DT) announced a 5G partnership to support smart industries and accelerate industrial digitalization. Reliance Jio, the Indian operator, has also transformed itself into a digital service provider by offering an array of services under the JIO brand (see figure 1).

Telia created Division X, a separate business unit focused on emerging areas such as IoT, 5G, and AI, building a digital ecosystem-enabled platform to monetize joint offerings with partners.

We have seen both horizontal as well as vertical expansion by operators to add more value to the telco value chain. The diversification of services through collaboration and co-creation with the B2B2X business model is the new norm.

So how well are Telcos able to adopt the partner ecosystem strategy? We have seen some progress, but how can they accelerate it to match evolving customer needs? A telco needs to define its role in the evolving value chain and ensure a successful transition into the role of an enabler or provider of new-age services.

What can Telcos bring to the table?

Playing to their strengths while making the digital transition.

CSPs need to keep innovating their service offerings to entice digitally savvy enterprises and end consumers. They need to start thinking like Google, Amazon, or any webscale organization to embrace an end-to-end digital transformation. The traditional way is not sustainable and demands a reshuffle of the strategic focus and priorities set in the past. Telcos need to move away from the perception of being mere connectivity providers and start leveraging their core competencies, such as a solid customer base, insights into customer needs, and network assets that can serve as a platform for digital disruptors.

The way forward strategy: Partner Ecosystem development

Telcos have long worked with interconnect partners, roaming partners, MVNOs, and other value-added service providers. But these partnerships are low involvement, with limited monitoring and contract management requirements. The new partnerships are becoming more complex and dynamic with the diversification of services and partners from across industry verticals.

A report by a leading analyst organization revealed that CSPs are already being cut out of strategic engagements and solution building with enterprise partners: CSPs are playing a secondary supplier role in 40% of the enterprise 5G deals signed. To capitalize on these opportunities, CSPs need to strengthen their position by creating a robust digital partner ecosystem that can deliver value with their offerings.

Where telcos will see the most opportunities in the near term:

Research shows that key industry verticals such as industrial automation, healthcare, connected cars, and intelligent homes represent a $1 trillion opportunity by 2023. Telcos will support a wide variety of use cases with network slicing, edge computing, and AI. However, a clear monetization strategy will have to be in place for these new revenue streams.

Healthcare: The radical shift towards digital health services has created an opportunity for Telcos to offer telemedicine solutions for remote health monitoring and health management for people with chronic diseases. Low-latency, ultra-reliable connectivity will provide accurate tactile feedback and interaction in remote surgical procedures.

Mobility: As the 5G networks roll out across cities and bring together existing wireless networks, they can provide real-time, end-to-end visibility into the transportation systems. Increased fleet visibility will also translate into better safety and reliability for travelers.

Gaming: 63% of gamers play with other fellow gamers. In fact, massively multiplayer online games make up the most popular gaming genre globally, but most gamers, especially multiplayer gamers, must deal with lag. By utilizing 5G end-to-end network slicing, operators can create a low-latency-focused slice to offer an enhanced gaming experience, while a separate high-bandwidth slice can be created for video streamers within the same mobile network.

Smart Industries: 5G will help create a more agile, fully connected, and automated end-to-end manufacturing experience from design to distribution. Supported by the unprecedented levels of AI/ML and automation, the smart industries will make faster decisions and quickly adjust to changes in near real-time.

While the opportunities are immense, Telcos alone cannot deliver the success that 5G promises. CSPs need to establish successful partnerships with digital incumbents and innovative startups on both the technology and service fronts to deliver the 5G promise to their enterprise and end consumers. One thing is certain: CSPs need to place cooperation and collaboration with partners at the center of value creation.

Subex is hosting a live webinar on “Innovating and Accelerating Growth in the 5G world” that will take a deep dive into the new generation of the partner ecosystem and how it can help CSPs deliver the 5G promises.

Author: Sanjay Bhatt

NFTs Explained in Two Pictures: The Good, The Bad … and The Ugly

  • Non-Fungible Tokens (NFTs) are taking the art world by storm.
  • A large number of serious problems outweigh any positives.
  • Two infographics to explain the process and issues.

The above image shows how an object of value, like an artwork, music file or GIF, can be “minted” and sold via a non-fungible token (NFT). An NFT is much like a certificate of authenticity. But instead of a physical certificate, you own a token: a unique piece of data on a blockchain. NFTs work as public ledgers, recording each transaction associated with the sale of an artwork. When you purchase an NFT, you’re essentially purchasing a tamper-proof digital receipt.
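As a toy illustration only (this is not how Ethereum or any real NFT standard is implemented; the file contents, wallet names, and token ID below are hypothetical), the "tamper-proof digital receipt" idea can be pictured as a ledger of ownership records keyed to a cryptographic fingerprint of the artwork:

import hashlib, json, time

# The artwork itself lives elsewhere; only its fingerprint goes into the record.
artwork_bytes = b"placeholder bytes standing in for the image file"
content_hash = hashlib.sha256(artwork_bytes).hexdigest()

ledger = []  # the public record of every transaction for this token

token = {"token_id": 1, "content_hash": content_hash, "creator": "artist_wallet"}

def transfer(token, seller, buyer, price_eth):
    # Each sale appends a receipt; the chain of receipts, not the artwork,
    # is what the buyer actually owns.
    ledger.append({
        "token_id": token["token_id"],
        "from": seller,
        "to": buyer,
        "price_eth": price_eth,
        "timestamp": time.time(),
    })

transfer(token, "artist_wallet", "collector_wallet", 2.5)
print(json.dumps(ledger, indent=2))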

An NFT is not:

  • An artwork—digital or otherwise. The buyer doesn’t actually possess the original item at all. It is permanently stored elsewhere. A secure method is to attach the work permanently to an Ethereum blockchain—containing the work, the unique identifier, and an ownership record. However, it is possible to store the art on a server separate from the NFT.
  • A right to copy, disseminate, or display the artwork [1]. The creator of the artwork usually keeps these rights. The buyer’s only right is that of ownership of an “original copy.” For example, the artwork in the image above is a portrait of Diana Ross I created in 2013. I sold the physical artwork and kept a digital copy. I could sell ownership of the digital image with an NFT (but I’m not going to, because of the environmental concerns outlined below).
  • An exclusive digital version of the artwork. Digital artist Beeple made history earlier this year when Christie’s auction house sold an NFT consisting of 5,000 of his illustrations for over $69 million. However, he posted each element of the art on Instagram; anyone can download a free copy, albeit without that prized COA.

NFTs do have a couple of positives: They provide a solution to tracking digital artwork and verifying ownership. In addition, the technology is enabling artists to make up for lost income due to the pandemic lockdowns. However, the marketplace is suffering from a deluge of criticism for a variety of issues.

The Bad…and The Ugly

At the top of the list are environmental issues. There is hot debate about how much energy NFTs specifically consume, but we do know that their close companion, cryptocurrency, uses more energy than the whole of Denmark (or Argentina). A sale of just six NFTs is estimated to use ten times the energy that an average American uses in a month [2].

Many other issues are plaguing the inchoate technology:

  • Fraud: Forgery is a pervasive problem for physical art collectors, and it has infiltrated the digital market as well. The lack of legislative control adds another layer of risk.
  • Ownership issues: Paying several thousand dollars for a virtual “token” falls into the realm of legal quagmire. From a legal perspective, it isn’t clear who owns what.
  • Prohibitive costs for artists: Artists can lose money due to “gas” and other fees associated with selling on Ethereum, even with minimum prices in the hundreds of dollars. Cryptocurrency fees can be so unpredictable and difficult to comprehend that some artists lose money before they even post a work for sale. One artist reported on Reddit that “Fees out the behind” for money transfers caused him to lose $45 before he could even list his artwork [3].
  • The bubble is about to burst. The NFT marketplace has also been dismissed by many market professionals as a “collector” bubble. Remember the Beanie Babies craze? Decades ago, these small cloth toys traded for thousands of dollars. Most are now worthless. James Surowiecki, a columnist for Slate and The New Yorker, states that investing in collectibles is “far more lucrative when you get on it early”– and that time has passed. “There’s the very real possibility that the whole thing will crash,” he says [4].
  • Tech Issues: NFTs are contributing to a global silicon chip shortage [5]. In addition, some buyers of NFTs aren’t aware of where their art is digitally stored. If it’s on a private server that crashes, the token will become worthless.

It’s doubtful that so many resources should be devoted to something that adds such dubious value to the human experience. Until the serious problems with NFTs are fixed, visit a local art gallery and support your local artist.

References

[1] MCN Insights: NFTs are a scam. 

[2] NFTs are not just bad for the environment, they are also stupid

[3] Lost $50 today trying to make NFT Art

[4] Fiat Lux News

[5] The paradox of NFTs: What are people actually paying for?

Digital Twins: Bringing artificial intelligence to Engineering

Digital Twins are increasing in usage but are often discussed in multiple contexts and in a simplified manner. Most references to the Digital Twin actually refer to a digital shadow, i.e., maintaining a digital copy of a physical object that is updated periodically. In a more complete sense, the Digital Twin concept relates to the simulation of, and interaction between, complex, multiple physical objects in a digital environment (typically for Engineering and Construction).

I am interested in the idea of Digital Twin because my teaching at the #universityofoxford applies more to AI in engineering (as opposed to say financial services).

Also, Digital Twins relate to the idea of physics-based modelling in Engineering. A wind tunnel is an example of a physics-based model. Hence, one could think of a corresponding digital entity that simulates the behavior of the physical model in a digital sense.

For this reason, digital twins are one of the best conceptual mechanisms for incorporating artificial intelligence into large-scale, dynamic engineering problems – especially considering existing ideas of physics-based modelling in engineering.

Digital twin technology is already used in various industrial sectors such as aerospace, infrastructure and automotive.

A paper I recently read talks about how Digital twins can be implemented through surrogate modeling.

The paper uses a discrete damped dynamic system to explore the concept of a digital twin.


The paper explores the use of a Gaussian process (GP) emulator within the digital twin technology. The GP has the inherent capability of addressing noisy and sparse data.

A GP is a probabilistic machine learning technique that attempts to infer a distribution over functions and then use that distribution to predict values at unknown points.

GP has two distinct advantages over other surrogate models:

  • Because a GP is a probabilistic surrogate model, it is resistant to overfitting.
  • A GP can quantify uncertainty, which can then be used in the decision-making process (a minimal sketch follows this list).
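The following is a minimal sketch, not the paper's implementation: it uses scikit-learn's GaussianProcessRegressor as a surrogate for a hypothetical, noisy physics-based response (the toy physics_model function and the sample sizes are assumptions made purely for illustration).

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def physics_model(zeta):
    # Toy "physics-based" model: peak response of a damped single-degree-of-freedom system
    return 1.0 / (2.0 * zeta * np.sqrt(1.0 - zeta**2))

rng = np.random.default_rng(0)
zeta_train = rng.uniform(0.05, 0.6, size=15).reshape(-1, 1)            # sparse inputs
y_train = physics_model(zeta_train).ravel() + rng.normal(0, 0.1, 15)   # noisy observations

# RBF kernel captures the smooth trend; WhiteKernel absorbs measurement noise
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(zeta_train, y_train)

zeta_new = np.linspace(0.05, 0.6, 50).reshape(-1, 1)
mean, std = gp.predict(zeta_new, return_std=True)   # prediction plus uncertainty estimate

The returned standard deviation is the uncertainty measure mentioned above, which a digital twin can feed into downstream decisions.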

Additional notes from the paper

  • The GP is not the only possible surrogate model, and the example (a spring system) is a relatively simple one, chosen for ease of explanation.
  • As IoT proliferates, digital twins would get more complex based on increasing data being reflected in the virtual world from the physical world
  • Digital twins / surrogate modelling approach suits dynamically evolving systems
  • Typically, the digital twin starts from an ‘initial model’ which is often a physics-based model.
  • Over time, as more and more components can be modelled virtually, digital twins of larger (composite) objects, e.g., aircraft and automobiles, would become the norm.

Paper link:

The role of surrogate models in the development of digital twins of…

The Danger of Making Decisions based upon Averages

“If you make decisions based upon averages, at best, you’ll get average results”

During the 1950s[1], United States Air Force pilots were having trouble controlling their planes. The problem turned out to be the cockpit, or more specifically, the fact that the cockpit had just one design: one designed for the 1920s average pilot. The Air Force concluded that they simply needed to update their measurement of the average pilot, adjust the cockpit accordingly, and the pilot handling troubles would go away.

With the help of Lieutenant Gilbert Daniels, the Air Force measured more than 4,000 pilots across 10 size dimensions. The Air Force had assumed that the vast majority of pilots would fall within average across the 10 dimensions. In reality, none – none – fell within average across all 10 dimensions; that is, out of 4,000 pilots, zero were “average” (see Figure 1).

Figure 1:  The Danger of Making Decisions Based Upon Averages
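A minimal simulation (with hypothetical, independent measurements, which is a simplification, since real body dimensions are correlated) shows why this result is not surprising. If "average" means falling in the middle 30% of each of 10 dimensions, the expected share of all-around-average individuals is roughly 0.3^10, i.e., about 6 in a million:

import numpy as np

rng = np.random.default_rng(42)
n_pilots, n_dims = 4000, 10
measurements = rng.normal(loc=100.0, scale=10.0, size=(n_pilots, n_dims))  # hypothetical data

# "Average" band per dimension: the middle 30% (35th to 65th percentile)
lower = np.percentile(measurements, 35, axis=0)
upper = np.percentile(measurements, 65, axis=0)

within_band = (measurements >= lower) & (measurements <= upper)
average_on_all = within_band.all(axis=1).sum()   # "average" on every single dimension

print(f"Pilots 'average' on all {n_dims} dimensions: {average_on_all} of {n_pilots}")

Run this and the count is almost always zero, echoing the Air Force's finding.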

The Air Force’s “aha” moment?

If the cockpit was designed for the average pilot, it was actually designed for no pilot

Todd Rose came up with the Jaggedness Principle of individuality. The Jaggedness Principle asserts that, when measuring a collection of traits across a sufficiently large number of individuals, roughly half of the individuals will be above average and roughly half will be below average on any particular trait, and that across all the traits, few (if any) will actually be “average” (notice the “jagged” line for each individual in Figure 2).

Figure 2:  Source: https://publicism.info/business/average/5.html

Since no one is “average”, why do organizations continue to make decisions based upon averages? We have spent much of our university education and professional careers being taught to make decisions based upon averages – average churn rate, average click rate, average market basket size, average mortality rates, average COVID-19 infection and death rates. And maybe when your data analytics tool of choice was a spreadsheet, the best you could do was use averages to make overly generalized policy and operational decisions. But the world is changing… and changing rapidly!

Unfortunately, averages don’t provide the level of granularity necessary to make precision decisions that drive the optimization of the organization’s key business and operational use cases. “On average” is not how successful companies will survive in a world of continuous transformation.

Fortunately, Big Data, Data Science, Analytic Profiles, and Nanoeconomics provide the foundation for changing the organization’s decision-making frame. It’s time for organizations – and management teams – to “Cross the Analytics Chasm” from making overly-generalized operational and policy decisions based upon averages, to making granular, precision decisions using big data and data science (see Figure 3).

Figure 3: Crossing the Analytics Chasm with Nanoeconomics

Some critical concepts for crossing the Analytics Chasm include:

  • Nanoeconomics. Nanoeconomics is the important concept guiding organizations across the Analytics Chasm. Nanoeconomics is the economic theory of individual entity (asset) predicted propensities, whether the entity (asset) is human (doctor, nurse, technician, operator, teacher) or a device (wind turbine, automobile, chiller, compressor). See Figure 4.

Figure 4: Nanoeconomics is economic theory of individual entity predicted propensities

Nanoeconomics is based upon identifying and codifying individual asset (human or device) predictable propensities, tendencies, patterns, and relationships.  And from those predicted propensities, organizations can make informed, precision decisions that seek to optimize the organization’s key business and operational use cases.

  • Analytic Profiles. Those predicted propensities are captured in Analytic Profiles (or asset models) that facilitate the application of those customer, product, and operational propensities against the organization’s key business and operational use cases (see Figure 5).

Figure 5: Analytic Profiles

See the blog “Analytic Profiles: Key to Data Monetization” for more details on the concept of Analytic Profiles.

“If you make decisions based upon averages, at best, you’ll get average results”

Crossing the Analytics Chasm requires a mind shift in how organizations make decisions. Making decisions based “on average” is not how successful companies will survive in the age of digital economic transformation.  Organizations need to embrace the power of nanoeconomics – the economics of individual entity (human or device asset) predicted propensities.

Organizations can couple the concept of nanoeconomics with Analytic Profiles to leap across the Analytics Chasm in transitioning from decisions based on averages, to decisions based upon predicted propensities.  And as a result, these organizations can become more effective at leveraging data and analytics to power their business and operational models (see Figure 6). 

Figure 6: The Big Data Business Model Maturity Index

The valuable data and analytic concepts mastered to cross the Analytics Chasm – nanoeconomics and analytic profiles – position the organization to exploit the economic potential of data and analytics and traverse the Big Data Business Model Maturity Index to reinvent business and operational processes, disintermediate customer relationships, and transform industry value creation processes.

But ya gotta start by getting over that darn Analytics Chasm…

[1] Story taken from the Harvard Graduate School of Education article “Beyond Average”

How the data warehouse can stand between your data and your insights

You have a product that has taken off. Your daily active users metric has been growing exponentially. The number of events per day you’re logging is now in the hundreds of millions.

As a result, you now find yourself with terabytes of data, or, if you have become really successful, hundreds of terabytes.

You begin to wonder if you could use all of this data to improve your business. Maybe you can use the data to create a more personalized experience for the users of your product. Or maybe you can use the data to discover demand for new products.

You request that your data team come up with a way to leverage this data to do just these types of things.

The data team that you have hired recommends that you develop a data pipeline, with the data warehouse as one endpoint of that pipeline.

You may get something like this:

Data Pipeline and Data Warehouse

But after months of work, and many dollars spent building the data warehouse, the data scientists that you hired can’t come up with the insights.

How could all of that data, all of those IT consulting hours, and all of those cloud computing resources be marshalled and yet fail to produce the insights?

The problem likely lies in one of the most important components of your pipeline: the data warehouse.

Here are some of the painful things you can experience in the data warehouse:

  • Poor Quality Data
  • Data that is Hard to Understand
  • Inaccurate / Untested Data
  • A Slow Data Warehouse
  • A Poorly Designed Data Warehouse
  • A Data Warehouse that Costs Too Much
  • A Data Warehouse that Does Not Factor in Privacy Requirements

Poor Quality Data

Your data may be streaming in from multiple sources. When an analyst runs a JOIN on this data, it can result in a table that is inconsistent. Inconsistent data can manifest itself as missing columns that are required to properly identify each data item. Or the data may contain duplicates that take up extra space and prevent you from performing the aggregations necessary to achieve insights without extra work (meaning extra analyst time cleaning the data via interpolation, and extra compute hours deduplicating the data).
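As a minimal sketch (the file names and column names below are hypothetical), a quick quality check of a joined table for exactly these two problems, missing join keys and duplicate rows, might look like this:

import pandas as pd

events = pd.read_parquet("events.parquet")   # hypothetical source tables
users = pd.read_parquet("users.parquet")

joined = events.merge(users, on="user_id", how="left")

# Rows whose join key found no match show up with missing user attributes
rows_missing_user = joined["signup_date"].isna().sum()

# Exact duplicates inflate storage and skew aggregations such as counts and sums
duplicate_rows = joined.duplicated().sum()

print(f"rows with no matching user: {rows_missing_user}")
print(f"duplicate rows: {duplicate_rows}")

clean = joined.drop_duplicates()   # deduplicate before computing metrics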

Data that is Hard to Understand

You have PhD’s on your analyst team. Why are they scratching their heads and shrugging their shoulders after looking at your data? It could be that the tables in the data warehouse are an enigma.

A lot of times, the data warehouse is built by a different team than the analysts. Both groups are trying to manage data but are not necessarily playing for the same data team.

Oftentimes the tables are created in a way that makes them easy to create but not easy to process downstream. The tables are created without taking the downstream requirements into consideration! No one thought to begin the data warehouse design with the end goal of quickly enabling insight generation in mind.

Inaccurate / Untested Data

Data items can be wrong. Data items may reflect something that is not possible. The data may reflect something going on in society that you do not want to serve as a basis for downstream analysis. The data must be accurate; otherwise, it will lead your analysis to wrong or detrimental insights. Untested data is worse than not having any data.

A Slow Data Warehouse

A data warehouse can be of no use because it takes too long to query, or goes down often. If users are not trained on how to write efficient queries, if the warehouse is not developed to automatically scale with the growth of the data, or if there are no protections in place to prevent abuse of the warehouse’s compute resources, your insights will never materialize.

Poorly Designed Data Warehouse

Business leaders who launch a data warehouse without first considering the business needs and translating these into actionable tasks will likely get a data warehouse that does not meet their business needs.

Not understanding these business needs upfront leads to miscommunication amongst the analysts, which leads to confused insights.

A Data Warehouse that Costs Too Much

One possible cause of a costly warehouse is not matching the right warehouse implementation option to your needs. Not every organization needs to create a from-scratch, on-premise, data warehouse. Doing this takes a lot of time, a lot of the right human resources, and equipment. This can yield a project that is late, over budget, and expensive to maintain or upgrade. As a result over time your warehouse becomes less useful as other priorities consume the organization’s resources.

A Data Warehouse that Does Not Factor in Privacy Requirements

Even if your product is a game, or something purely consumer oriented, and even if you spell out clearly in the terms of service that whatever data the user shares is yours, you still can’t ignore how the data warehouse will protect your user’s identifiable information.

Not taking this into consideration can result in people in the company being able to look up specific users for non-business purposes. It can result in people in the company misusing personally identifiable information, which can hurt your users, and negatively impact daily active user growth. It can result in personally identifiable information inadvertently leaking somewhere downstream.

How to Deal?

There is no magic bullet for addressing these many issues. While some of these issues are technical in nature (and just require the right know-how), others are organizational, meaning you can’t just download a freeware tool to solve them.

But briefly, some of these issues can be addressed by:

  • Have a well-organized product development process, for example using agile.
  • Have a well-thought-out product life cycle process; organizing around cross-functional teams can work well.
  • Realize that there is no one-size-fits-all data warehouse. You will have some warehouses configured as high-speed data stores to capture data streaming in from your product; these are data warehouses configured to prioritize transactional activity. Other data warehouses will be configured as always-on, highly available, scalable, and reliable data stores whose purpose is to hold your hundreds of terabytes of data in a queryable form to enable the data analysts.

What Makes Power BI the Most Powerful Data Visualization Tool

Nowadays, businesses have to rely on data in unprecedented ways. In fact, businesses hailing from various disciplines use massive amounts of data on a daily basis. They gather data from several sources, offline and online. However, it is also important to compile and process that data and analyze it using apt software solutions. That is why Data visualization applications are used by so many companies. From technology giants to leading MNCs, plenty of companies are relying on BI and data visualization solutions like Microsoft Power BI. For effective Power BI implementation, hiring a veteran development agency is recommended. 

Understanding the Importance of Data visualization

Before you invest in a specialized data visualization tool like Power BI, it is necessary to know the significance of data visualization. Your business may obtain data from myriad sources. Analysis of that data helps in understanding customer preferences, areas of improvement, market trends, and so on. This, in turn, helps businesses take key decisions and make strategic moves. To understand the analyzed data properly, presenting it in a comprehensible visual manner is necessary. That is where data visualization steps in.

Microsoft Power BI as a data visualization tool

Microsoft Power BI is a BI solution that has robust and embedded data visualization capabilities. The data compiled and analyzed by the tool is visually represented using several elements. These include graphs, videos, charts, images, etc. These visual elements help the users and viewers understand the data well. It is possible to use many filters or parameters to represent data in specific ways. Dashboards and reports are also key features present in the application. Of course, you will gain from the services of a veteran Power BI developer for utilizing these elements.

The elements in Power BI used for data visualization

As it is, Power BI comes with a wide range of data visualization elements. These include:

Charts of varying types

  • Area Charts– It is also referred to as a layered area chart. It is used to indicate a change in one or several quantities over time. Area charts should be used when the user wants to display and see any variable’s trend over time. For example, it can be deployed to get a glimpse at the workforce productivity in various quarters. You may also use it to analyze the sales and expenditure of the company quarter-wise.
  • Line chart– A line chart is one of the widely used visual elements in Power BI. It is useful when you want to visually represent trends over time. The data points are joined by a straight line horizontally. For example, it can be used to represent the sales figure of a company in a financial year.
  • Bar Charts– These charts are used to represent categorical data through horizontal bars. They are used a lot in Power BI as they are easy to comprehend. A bar chart can be used, for example, to represent the growth rate of various departments in a company per quarter.
  • Combo chart– A Combo chart blends a column chart and line chart. They can be useful when you want to compare several measures with varying value ranges. They can be used to illustrate the association between two measures in a single visualization.
  • Doughnut charts– A doughnut chart is much like a pie chart, and it is used when it is necessary to display the relationship of a section to a whole. However, users need to remember that doughnut chart values should add up to 100%. Using too many categories in a doughnut chart makes it hard to read. 
  • Funnel charts– Funnel charts are used when it is necessary to illustrate sequential connected stages in any process. It is used widely to show sales processes. Each funnel stage denotes a certain percentage of the total amount. A funnel chart resembles a funnel, with the first stage being the biggest in size. 
  • Pie charts– A pie chart is somewhat like a doughnut chart, and the combination of all segments must add up to 100%. The data is segregated into slices, which is useful for representing similar categories of data.
  • Gauge charts– A gauge chart may remind you of the speedometer used in regular car dashboards. In it, a needle is used for data reading. 

There are some other types of charts available in Power BI, like the waterfall chart. However, these are typically used by Power BI experts.

Maps

In Power BI, you can make use of maps to represent sales data. This is accessible through the globe icon in the tool’s visualization pane. You have to pick the required categories. 

There are three types of maps, namely Flow maps, point maps, and regional maps.

R and Python for data visualization

Microsoft has made it possible to use R and Python to enhance the data visualization prowess of Power BI. This can be immensely helpful for the end-users who want their reports to be as information-rich and visually enticing as possible. 

R is a language that is used extensively for graphics and statistical computing. For that, it is necessary to have RStudio and the required packages and libraries in place. R provides a robust platform for data visualization and analysis. In fact, with it, you can visualize data prior to the analysis.

Python is another programming language that can be used with Power BI. It is necessary to set up Python with the necessary libraries and packages in the system. Python, in fact, has been used for years for data visualization needs. However, it lacks robust chart generating options, which can be achieved by integrating it with Power BI. 
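As a minimal sketch of what this looks like in practice: when you add a Python visual in Power BI, the fields you drag into the visual are exposed to the script as a pandas DataFrame (named dataset in the script editor), and a matplotlib figure is rendered back into the report. The column names below (Month, Sales) are hypothetical.

# Script for a Power BI Python visual; Power BI supplies `dataset` automatically
import matplotlib.pyplot as plt

df = dataset.drop_duplicates()   # the DataFrame built from the visual's fields

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(df["Month"], df["Sales"], color="#118DFF")
ax.set_title("Monthly Sales")
ax.set_xlabel("Month")
ax.set_ylabel("Sales")

plt.show()   # Power BI renders the resulting figure inside the report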

It is hard to locate another BI and data visualization tool that is enriched with as many visual elements as Power BI. After you equip the dashboard with various visual elements and are happy with the visual representation of the data, you can publish it. Depending on the version of Power BI you have, it is possible to share reports with other Power BI users, or even with those who do not use the platform.

Data visualization tips for Power BI users

As it is, Power BI is laden with so many visual elements that using them in the right way can be tedious for some users. This may be tougher for those who are new to the platform. Listed here are some effective tips for extracting the most out of data visualization features in Power BI. 

  • Before using any visual element such as a type of chart or map, think of the purpose and type of data to be represented. 
  • Do not clutter the dashboard using too many visual elements at a time. Also, customize the charts with an apt color and label for making these easy to comprehend. 
  • You can also add visual elements in Power BI that you may have used in the MS Office suite. These include shapes, text boxes, and images. After adding, you can resize these elements as well. 

Summing up

As you can see, Power BI is a powerful data visualization tool, and you can use many of its embedded visual elements to showcase your data effectively. However, it is also necessary that you pick the visual elements cautiously and avoid overdoing things. You may also seek the help of Power BI development services to create killer Power BI reports.

Tech-Driven Transformation of the Legal Sector

Legal Tech refers to the technology used in the legal sector. It has significantly transformed how attorneys and other legal professionals perform their duties. Moreover, it has brought a lot of opportunities for law offices (solo legal practices, law firms, and corporate/government legal departments) by digitally transforming legal operations, helping them meet client demands efficiently and timely.

 More interestingly, the scope and adoption of technology are not limited to top legal organizations. Small-size law firms and even legal startups have also invested in technology, taking advantage of the opportunities it offers.

4 Ways Technology is Transforming the Legal Sector:

Advanced technologies simplify lawyers’ work and improve legal services’ quality, all while reducing operational costs. Here’s how.

Bridging Communication Gaps between Lawyers & Clients

Lawyers can now collaborate more effectively with their teams and establish a flexible, more secure medium to communicate with clients by utilizing unified communication tools. This can result in enhanced productivity and client satisfaction.

The Era of Automated eDiscovery

Searching for documents and highlighting or tagging relevant evidence pieces are parts of case preparation that consume a lot of time. Nowadays, most of the paperwork is digital; here, eDiscovery automation software (powered with advanced analytics) can automatically find and tag keywords and key phrases and eliminate irrelevant documents, helping attorneys speed up the entire process.

Case Management Becoming Easier

Many software solutions on the market enable attorneys to manage different case management functions using one platform. For instance, schedule preparation, contact-list organization, document management, billing data entry, etc., are now easier to manage. Besides, any cloud-based case management software allows attorneys to store all the case-relevant data (helpful information) in a centralized location and access it anytime from anywhere. This is even more beneficial for those working remotely.

The Rise of Attorneys’ Online Communities

 

By coming together in large numbers, attorneys create community groups, most often to help people who don’t have access to professional legal advice and counseling. Another motive is to include law students and solo attorneys in community groups and discuss several legal profession-related topics, such as issues, trends, news, etc.

Social media sites and apps, especially Twitter and LinkedIn, are becoming more popular as a forum platform for attorneys to connect with other legal professionals and establish a strong network in the industry.

 

Challenges Brought by Technology for the Legal World

Undoubtedly, technology is making an attorney’s job and life more manageable. However, it also brings along various challenges; let’s discuss a few.

The Sudden Knock on The Door

A few years back, the sudden arrival of the technology revolution left many attorneys (mostly veterans) baffled in their profession. Typical legal processes such as research, documentation, case preparation, etc., used to be managed manually. However, with digitization, many legal tasks are handled by automation-powered software and tools, raising several challenges for attorneys.

Lack of Technical Knowledge & Expertise

Many legal professionals are still unable to understand which technologies are best for their practice and how to make the best use of them to obtain favorable results. Due to this, many law firms often choose legal process management services provided by external firms having skilled legal professionals with all required technical knowledge and capabilities.

Issues on Organizational Level

Due to the rapid and intense emergence of technology in the legal sector, lawyers and law firms need a massive operational overhaul, transforming several processes. From lead generation to revenue recognition, everything needs to change. Consequently, law firms now have to deal with challenging situations such as determining the usage scope of legal tech, developing new business models, establishing policies, etc.

Financial Restraints

The cost of technology adoption and maintenance puts a strain on many law firms’ budgets, forcing them to think twice before making any tech investment. Many legal organizations overlook the advantages of technology for their practice because of financial problems.

Nowadays, individuals or businesses in legal need would prefer choosing a tech-driven legal services firm. Besides attracting more potential clients, here are some other benefits of the continuously growing legal tech.

Benefits of Legal Tech

 

Reduction in Manual Effort

Legal tech, for example data processing, document management, and eDiscovery software, manages these processes automatically, allowing lawyers to free up a significant amount of time. As a result, they can utilize this time to communicate with clients and prepare case files for the effective representation of clients in the courtroom.

Research Work becomes Easy

Legal research tools assist lawyers in becoming updated about different rules and regulations. Such software can also identify relevant documents on the internet, shortlist them, and even highlight key phrases that can be helpful for an attorney to make their argument stronger.

Better Management of Resources

Various applications for title management and calendaring allow attorneys to efficiently manage work related to titles and get valuable insights regarding all tasks scheduled for a particular workday. This enables senior attorneys to utilize resources (paralegals or other clerical staff members) more effectively, bringing better results.

Minimized Risk of Mistakes/Errors

Tech-driven data entry and management solutions restrict access to sensitive and confidential information a law firm holds, for instance, clients’ case details. Besides, integrating such tools with analytics can help make better use of the available data.

Enhanced Transparency

With the help of reliable law practice management software, law firms can better control their processes and eliminate workflow issues. Such software records real-time information (for example, how much time a paralegal has to spend on a particular client’s case) in a centralized, secure location. This data can then be used for billing and for analyzing staff performance, productivity, and much more.

Excellent Customer Experience

Using AI-powered legal software, attorneys can send personalized emails to clients. Also, by collecting client data, such software allows law firms to understand their clients’ needs better, helping them meet their demands well in time. Thus, legal tech can help law firms enhance clients’ experience by providing highly customized legal services.

 

Conclusion

Solo attorneys, law firms (of all sizes), and corporate/government legal departments must become aware of the technology they can use to improve legal operations. Since many legal processes are shifting toward digital platforms, they need to catch up with legal tech trends and adopt all easily applicable tools. It is time not to be afraid of being replaced by the technology that might be developed in the near future; instead, it is time to gain knowledge, improve technical skills, and utilize technology for the betterment of legal functions, processes, and the overall growth of the legal business at large.

The Secret behind Train and Test Split in Machine Learning Process

What is Data Science and Machine Learning?

 Data Science

  • Data Science is a broader, multidisciplinary concept.
  • Data science is a general process and method for analyzing and manipulating data.
  • Data science enables us to find insights and appropriate information from the given data.
  • Data Science creates opportunities to use data for making key decisions across different business domains and technologies.
  • Data science provides vast and robust visualization techniques to understand data insights.

     Machine Learning

  • Machine learning fits within data science.
  • Machine learning uses various techniques and algorithms.
  • Machine learning is a highly iterative process.
  • Machine Learning algorithms are trained over instances.
  • Machine learning models learn from past experiences and also analyze historical data.
  • A machine learning model is able to identify patterns in order to make predictions about future data.

“The main difference between the two is that data science as a broader term not only focuses on algorithms and statistics but also takes care of the entire data processing methodology”

Let’s quickly look at an overview of the Machine Learning process and then jump into Train and Test.

Understand the scenario

Consider how students are trained before their board exams by great teachers in school or college.

At the school/college level, we go through many unit tests, term exams, revision exams, surprise tests, and so on. Here we have been trained on various combinations of questions and mix-and-match patterns.

Hopefully you have come across these situations many times in your studies. The data sets we use in Data Science are no exception, because we need to build a very strong model before we deploy it to a production environment.

Similarly, in the Data Science domain, the model is trained on sample data and made to predict values from the available data set, after the data wrangling, cleansing, and EDA processes, before being deployed into the production environment and before the model meets real-time/streaming data.

This process always helps us understand the insights of the data and decide which model we could use on our data set to address the business problems.

Here we must take care that the data set matches the real-time/streaming data feed (to cover all combinations) that the model will face in the production environment. So the choice of data set (data preparation) is really key before the train-and-test process. Otherwise, the model’s situation becomes pathetic… as in the picture below. There might be a huge loss of effort, an impact on project cost, and, in the end, unhappy customers.

Here you should ask me the below questions.

  • Why do you split data into Training and Test Sets?
  • What is a good train test split?
  • How do you split data into training and testing?
  • What are training and testing accuracy?
  • How do you split data into train and test in Python?
  • What are X_train and Y_train X_test and Y_test?
  • Is the train test split random?
  • What is the difference between the training set and the test set?

Let me answer them one by one here, for your benefit, so you can understand better!

How do you split data into training and testing?

80/20 is a good starting point, giving a balance between comprehensiveness and utility, though this can be adjusted upwards or downwards based upon your model performance and volume of the data.

  • Training data is the data set on which you train the model.
  • It is the data from which the model learns its experience.
  • Training sets are used to fit and tune your models.
  • Test data is the data that is used to check if the model has learned well enough from the experiences it got in the train data set.
  • Test sets are “unseen” data to evaluate your models.

Architecture view of Test & Train process

Code to split a given dataset

from sklearn.model_selection import train_test_split

# split our data into training and testing data (25% held out for testing)
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.25, random_state=0)

What are training and testing accuracy?

  1. Training accuracy is usually the accuracy we get if we apply the model to the training data
  2. Testing accuracy is the accuracy of the testing data.

It is useful to compare these to identify how the training and test sets are doing during the machine learning process.

Code

from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

model = LinearRegression()  # initialize the LinearRegression model
model.fit(X_train, y_train)  # fit the model with the training data

linear_pred = model.predict(X_test)  # make predictions with the fitted model

# score the model on the train set
print('Train score: {}\n'.format(model.score(X_train, y_train)))
# score the model on the test set
print('Test score: {}\n'.format(model.score(X_test, y_test)))
# calculate the overall accuracy (R^2) of the model on the test set
print('Overall model accuracy: {}\n'.format(r2_score(y_test, linear_pred)))
# compute the mean squared error of the model
print('Mean Squared Error: {}'.format(mean_squared_error(y_test, linear_pred)))

Output

Train score: 0.7553135661809438

Test score: 0.7271939488775568

Overall model accuracy: 0.7271939488775568

Mean Squared Error: 17.432820262005084

What are X_train and Y_train X_test and Y_test?

  • X_train — This includes all your independent variables (I will share detailed notes on independent and dependent variables); these will be used to train the model.
  • X_test — This is the remaining portion of the independent variables from the data, which will not be used in the training set. It is mainly used to make predictions to test the accuracy of the model.
  • y_train — This is your dependent variable that needs to be predicted by the model; it includes the category labels against your independent variables X.
  • y_test — This is the remaining portion of the dependent variable. These labels will be used to test the accuracy between actual and predicted categories.

NOTE: We need to specify our dependent and independent variables before training/fitting the model. Identifying those variables is a big challenge, and it should come from the business problem statement that we are going to address. A minimal sketch of defining X and y is shown below.
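As a minimal sketch (the DataFrame and its column names are hypothetical), selecting the independent variables X and the dependent variable y before splitting could look like this:

import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical data set: predicting house price from size and number of rooms
df = pd.DataFrame({
    "sqft":  [850, 900, 1200, 1500, 1100, 2000, 1750, 950],
    "rooms": [2, 2, 3, 4, 3, 5, 4, 2],
    "price": [90, 95, 130, 170, 120, 240, 200, 100],   # in thousands of dollars
})

X = df[["sqft", "rooms"]]   # independent variables (features)
y = df["price"]             # dependent variable (target to predict)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)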

Is the train test split random?

The importance of the random split is explained clearly and in a simple way in the picture below! You can understand it from the pictorial representation!

In simple terms, the model gets to see all the data combinations that exist in the given data set.

The random_state parameter is used for initializing the internal random number generator, which will decide the splitting of data into train and test.

Let’s say random_state=40; then you will always get the same output every time you make the split. This is very useful if you want reproducible results while finalizing the model. From the picture below you can understand better why we prefer RANDOM sampling. A quick sketch of this reproducibility follows.
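As a quick sketch (toy data, purely for illustration), a fixed random_state reproduces exactly the same split on every run, while leaving it out gives a different split each time:

import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)   # 10 toy samples with 2 features each
y = np.arange(10)                  # toy labels

# Same random_state -> identical split on every run
X_tr1, X_te1, y_tr1, y_te1 = train_test_split(X, y, test_size=0.3, random_state=40)
X_tr2, X_te2, y_tr2, y_te2 = train_test_split(X, y, test_size=0.3, random_state=40)
print("Reproducible split:", np.array_equal(X_te1, X_te2))   # True

# No random_state -> a different shuffle (and therefore split) on each run
X_tr3, X_te3, y_tr3, y_te3 = train_test_split(X, y, test_size=0.3)
print("Test rows without a fixed seed:\n", X_te3)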

Thanks for your time in reading this article! Hope! You all got an idea of Train and Test Split in the ML process.

Will get back to you with a nice topic shortly! Until then bye! See you all soon – Shanthababu

10 Email Marketing Tools For You To Consider

Email Marketing can be challenging. I learnt this lesson from my experience in the digital marketing sphere and from being a support representative at an email software company. Why? There are a number of reasons. The challenges come in different forms and from various places and refer to segmenting an audience, finding contacts, and designing a perfect subject line, to name a few.

Such activities require tons of creativity, consistency, and research from marketers. Yep, email marketing is still one of the most efficient marketing channels in terms of ROI. This fact only fuels the competition in the industry, leading marketers to seek new solutions.

Notably, email campaign software has become the go-to option for many brands and businesses. Automation is entering many spheres and enterprises, and digital marketing is no exception. Email campaign software makes a difference there.

However, how many email platforms are there? A lot. I have been working in digital marketing for some time and understand why the amount of software available to marketing teams can be very confusing.

That’s why I have designed a list of the top email marketing software that can add to your small business, startup, or long-term campaign. This post will be helpful for those who have doubts about which email marketing to use or have just started a journey into the marketing world.

Top Email Marketing Services

Before all, the automation tools I am listing in this post are different and answer to similar needs of a marketer. Some of them are all-in-one solutions; others aim to facilitate a specific issue. Interestingly, you can combine one tool with another. 

How to choose the best marketing software? Pick the one that will help your business needs or goal. The right email marketing tools are about answering the challenges. What are some that marketers consider crucial? Scheduling, organization, personalization, segmenting and data collection. Each of them is equally important for the open and click rates within lead generation. 

At the same time, many of you have struggled with email templates; there are tools for it as well. Among other things, the platforms help to track results and report on valuable data. All in all, it is what a reliable marketing tool is to be expected of. 

Let’s look at the options that can help you with the email marketing objectives. 

1. Constant Contact 

Constant Contact is at the beginning of the list as it has a specific focus on email marketing and has been in business long enough. Although I used it only for a while, many colleagues of mine refer to it as an excellent solution for small businesses. Why is it good?

First of all, it puts simplicity and accessibility at the heart of email campaign design. For instance, the platform offers management of emails, sending schedules, and content. This covers template and newsletter creation, together with the insertion of CTA buttons. Importantly, it has integrations with Shopify, underlining its usability for small businesses.

Also, it offers email list management and segmenting for better targeting. In the end, it is used by many small companies to generate leads. However, what I heard is that their users wish they paid less for the simplicity the particular platform offers.

2. GetProspect 

Have you ever struggled with email list enrichment? I bet you have. GetProspect email finder may be a solution, with its simple interface, easy-to-use functions, and extraction possibilities. I have worked at this company for some time and must say it does a pretty good job at what it offers. What exactly is it, and what value does it provide to your business?

Well, small businesses usually struggle with getting contacts of their target audience. If you are a b2b service, they may be business owners, CMOs or CEOs of firms. If you are a marketer or SEO specialist, they may be influencers or bloggers. Lastly, if you already have an extensive database, you may need to verify it. GetProspect has these functions. With it, you can extract the emails from Linkedin or any corporate website.

It’s not the only email finder on the market. Still, it can be integrated with other CRMs via Zapier and has a very minimalistic design. Thus, you can extract your groups of contacts, transfer them to a larger platform, and produce the campaign you want.

Many of its users say that that simplicity and straightforward solution to email enrichment captivate them.

3. Mailchimp

You probably have heard of this marketing tool. It is one of the leaders for a reason. If I haven’t mentioned this in my post, it would be a mistake. Why is it good? There is a free package, providing valuable functions, while paid options are to bring even more. 

I used Mailchimp for its easy-to-use tracking and email building. Particularly, it has the drag-and-drop feature, which can help a lot if you are new to email design.

Simultaneously, Mailchimp can be handy in segmenting audiences. I had to use it on my first marketing assignments and was very glad it had a drag-and-drop function. Making discount coupons and give away campaigns required much less time, thanks to a large collection of templates.

However, looking back, I can say it has basic analytics and segmentation, while for the advanced ones, the user should pay. Notably, a friend of mine had some issues with the support department and their responses. Bad luck, possibly.

Lastly, integration capabilities with other platforms can significantly add to the user’s experience, though. It will be a great choice if you are supposed to level your email creation before entering a larger market and nurturing more leads.

4. Hubspot

HubSpot is another popular solution that many businesses use. The pros of this email marketing software lie in its universal nature. The software offers an all-in-one automation solution for many marketing platforms. However, it also offers a separate email marketing tool that is free.

Similar to Mailchimp, it provides assistance in preparing visual materials and producing the body of emails. Some of my colleagues did like the interface and the follow-up sequences triggered by purchases via websites. However, as it is a free tool, though by a recognized company, it has some limitations, while the full version can be costly for small firms.

I would be using it if I have plans of enlarging my business, where email won’t be the crucial part of my marketing activity but add to the social media strategy. At the same time, it would be great if you are trying and experimenting with email marketing or considering unifying all of your channels under one CRM system. Then, Hubspot will be the perfect solution. 

5. Sendinblue

Sendinblue has made it to this list due to its surprising features, considering the time we live in. Who sends SMS messages today when we have messengers? However, this tool does! It also facilitates email campaign management, with automation and personalization possibilities. In short, it is excellent for sending transactional messages. I had my team use it for one event project, and it did great.

At the same time, the template options are not as advanced as those provided by the top email marketing services above. Thus, choosing this option would be suitable for those who already have their template game at an adequate level. That is one downside among some others.

They refer to a limited free package and multiple logins only under advanced packages.

Still, it is affordable and should be a good choice if it suits your goal and strategy.

6. Sender

As for this email marketing software, you may want to use it if you are pursuing deliverability improvements. The algorithms behind Sender focus on tracking delivery rates. At the same time, there is a facilitator for template creation. One can add different visuals that will surely optimize the engagement rates of the campaign. The service pays attention to details, making your email marketing campaign bright.

Still, I heard that they had some lags within their segmentation feature, which the company is likely to have taken care of. Why do I think so? Their customer support is friendly and lends a helping hand irrespective of the issue’s complexity, even though the pricing is relatively low.

7. Drip

You may think that this tool can be helpful in drip campaigns. This mailing campaign software has a powerful segmenting focus and synchronizes with many website builders.

Such a combination makes Drip useful for many entrepreneurs or small business owners that conduct their business online. In addition, they have a bunch of personalization features. That’s why many consider it ideal for firms with small operations in specific niches.

One of the cons is that it can be a bit pricey, though it does offer educational materials for users. Still, the analytics, targeting, and personalization in this email automation service can be a game-changer for the owner of a small firm.

8. ConvertKit

ConvertKit is another email marketing tool that is handy for designing email campaigns. Like Drip or Mailchimp, it is excellent for segmenting the audience, but it does so through tagging. Colleagues of mine have said it is easier to manage different groups and target them by the tags at your disposal, especially if you sell only one product.
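To make the tagging idea concrete, here is a minimal, tool-agnostic sketch in Python (not ConvertKit's API): each contact carries a set of tags, and a campaign simply targets everyone who matches a given tag.

```python
# Tool-agnostic sketch of tag-based segmentation; contacts and tags are
# invented for illustration and do not come from ConvertKit.
contacts = [
    {"email": "a@example.com", "tags": {"newsletter", "bought-ebook"}},
    {"email": "b@example.com", "tags": {"newsletter"}},
    {"email": "c@example.com", "tags": {"webinar", "bought-ebook"}},
]

def segment(contacts, tag):
    """Return the contacts carrying the given tag."""
    return [c for c in contacts if tag in c["tags"]]

# Target a follow-up campaign at everyone who bought the ebook.
for contact in segment(contacts, "bought-ebook"):
    print("queue upsell email for", contact["email"])
```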

On the other hand, the tool can be challenging at first; you may need some time to grasp all its functions. That happened to me, and I ended up choosing another solution. Still, if you want to strengthen your lead generation funnel, ConvertKit can work.

9. AWeber

AWeber is a traditional, straightforward mailing campaign tool designed solely for email marketing. It offers both advanced and drag-and-drop options for template creation, and because it has been on the market for a long time, it comes with an extensive knowledge base and solid support.

It also has all the standard features for personalization, follow-up automation, list management, and segmentation. Most importantly, it is simple.

I started my own email marketing journey with this tool, and as a newbie in marketing I found it easy to use. That is why it works well as an all-purpose tool for tiny companies that are just starting to sell their product and have not built large lists yet.

10. Omnisend

Omnisend can be a great choice if you are developing your business across several channels. Although its feature set is fairly basic, it includes SMS automation and works with numerous platforms.

You can run different campaigns, and Omnisend's reporting will show where the revenue came from, which is essential for prioritizing campaigns and offers for each customer group.
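As a rough illustration of that kind of reporting (plain Python with invented numbers, not Omnisend's actual output), revenue attribution comes down to grouping order revenue by the channel that drove it:

```python
# Illustrative revenue-by-channel rollup; the orders are invented, not Omnisend data.
from collections import defaultdict

orders = [
    {"channel": "email", "revenue": 120.0},
    {"channel": "sms",   "revenue": 45.0},
    {"channel": "email", "revenue": 80.0},
    {"channel": "push",  "revenue": 15.0},
]

revenue_by_channel = defaultdict(float)
for order in orders:
    revenue_by_channel[order["channel"]] += order["revenue"]

# Rank channels so the highest-earning campaigns get prioritized.
for channel, revenue in sorted(revenue_by_channel.items(), key=lambda kv: -kv[1]):
    print(f"{channel}: ${revenue:.2f}")
```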

Beyond easy management, automation, and beautifully designed templates, it offers affordable packages. If you run a small business built around visually appealing products, like jewellery or crafts, you are likely to benefit from this email campaign software's templates.

Lastly, if you need something that aligns more closely with other strategies or with website design, another option may serve you better.

Bottom Line

There are many email marketing tools, and picking the right one depends on your goal and your business. You may need a tool solely for email campaigns, or one for contact research; the best choice is whichever works most efficiently for you. I compiled this list from what I have experienced and what I have heard from colleagues.

When choosing a tool, look at the challenges you face and where a tool can give you an advantage. If the issue is extracting contacts, Getprospect is a solution. If you have multiple products and many platforms or channels, Mailchimp or HubSpot can be the pick.

If you need help with templates, picking an email automation service that focuses on template design will lift your engagement rates. And if segmentation is what you lack, Drip and ConvertKit have efficient mechanisms and reporting for working with contact data.


The Coming College Crisis

Abandoned Universities

This article is also available as a podcast on Spotify.

Education, especially college education, is facing an existential crisis. Partially due to demographic factors, and in part due to decisions made by policy-makers at national, local, and academic levels, colleges and universities are struggling to stay afloat. What’s more, there are signs that conditions are likely to get far worse for the academic world for at least the next couple of decades. The question this raises ultimately comes down to “what is it that we as a society want out of our education institutions, and what is likely going to need to change for them to survive moving forward?” 


We are less than halfway through a broad global decline in the birth rate that began in the early 1990s, with the worst yet to come.

The Looming Baby Bust

While many factors influence the future, there are few indicators that futurists watch as closely as the birth rate. This measure (used here in the sense of the total fertility rate, the average number of births per woman) has a profound influence on everything from the economy to the rate of innovation to social trends. If you know how many people are born today, you have a surprisingly good idea of what the world will look like in 30-40 years, a better idea than technological trends or sociological shifts are likely to give you.

United States population over time (chart via Macrotrends).

There have been a few major inflection points in the birth rate over the last century. The rate had risen and fallen with some regularity in the United States in particular (where this article will remain focused) until the Great Depression began in 1929, when it reached a historical low. By 1936, however, the birth rate had started to climb dramatically, ultimately producing the single largest percentage increase in population of any generation in the last four hundred years. By 1955, when the Baby Boom was at its peak, the rate stood at 3.9 births per woman, which translates roughly to a family size of almost four children per set of parents.

For reference, a population needs about 2.1 births per woman for its size to remain stable without migration (for births to keep pace with deaths). It is worth noting that the United States has not been above this replacement rate since 1972, which means that population growth in the United States since then has depended increasingly on immigration.

There was a second, echo boom that peaked around 1993 (when the fertility rate was just a hair under the replacement rate, at 2.07), plateaued before peaking again in 2008, and then declined dramatically before hitting an apparent bottom in 2019. The rate for 2020, at 1.78 births per woman, was only slightly above where it was in 1972. We could be looking at the start of a new cycle at this point, but the stability point also depends on the death rate, which has risen significantly because of Covid-19.

This has a number of implications, especially for the educational system. Someone born in 1991 is now 30. By 2010, the number of entering college students had peaked, and it plateaued for about four years. The students entering college today were born after 9/11, and colleges and universities are already seeing a drop-off in student numbers. In four more years, however, the students reaching 18 (nominally college age) will be those born in or after 2007.

What is significant about that year? It was the year the Great Recession hit, triggering a sharp drop-off in the birth rate that would end up lasting more than a decade. Federal funding of post-secondary education had been declining, leaving states to pick up the slack at precisely the moment they were being hammered by lower tax revenues from the recession. This pushed more students (and their families) into taking out student loans, saddling them with higher school debt even as jobs remained scarce, and putting a damper on college for those who have come in since.

In 2025, those born in 2007 will start going to college, but there will be fewer of them, not just in relative terms but even in absolute terms, and this trend will continue until at least 2039, when those born today start college. What that means in practice is that colleges and universities are going to be facing the biggest student drought in history, with enrollment down by as much as 25% from current levels by the end of it, assuming the current composition of students.
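To make the cohort arithmetic explicit, here is a small illustrative sketch in Python. The cohort sizes below are invented placeholders rather than census figures; the only point being demonstrated is that a birth-year series fixes the college-entry series roughly eighteen years later, which is why the post-2007 decline arrives on campuses starting in 2025.

```python
# Illustrative cohort arithmetic: college entry lags birth by about 18 years.
# The birth counts are placeholder values, NOT census data; only the lag and
# the relative decline mirror the argument in the text.
ASSUMED_ENTRY_AGE = 18

births_by_year = {          # hypothetical cohort sizes (millions)
    2007: 4.3,              # last pre-recession peak discussed above
    2013: 3.9,
    2019: 3.7,              # near the 2019 trough mentioned above
}

for birth_year, cohort in births_by_year.items():
    entry_year = birth_year + ASSUMED_ENTRY_AGE
    decline = 1 - cohort / births_by_year[2007]
    print(f"born {birth_year} -> enter college ~{entry_year}, "
          f"cohort {decline:.0%} smaller than the 2007 cohort")
```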

Covid has forced remote learning to advance societally by at least five, and maybe ten years, catching many schools flat-footed.

The Pandemic Challenges Academia

The Covid-19 pandemic is another system-level shock, albeit one whose impact will likely fade over time. One thing it has done, however, is force the adoption of online telepresence five to ten years sooner than it would have happened otherwise. Put another way, without the pandemic, social inertia would likely have meant another decade before we reached the level of remote interaction we have now. Once the threat of Covid-19 finally fades (hopefully by the end of 2021), there will be some return to older patterns, but far less than many managers currently believe.

For businesses, the move towards working from home has been mixed, especially outside the digital space. Digitally oriented businesses have generally thrived during the pandemic, but physically based businesses, from restaurants to manufacturing to entertainment to mostly brick and mortar retail, have been hit hard. In theory, colleges and universities should have been able to adapt, but while most universities would have seen themselves as being digital in nature, the physical constraints and tacit assumptions that colleges work under proved far more limiting than expected.

Covid uncovered an uncomfortable realization: remote attendance worked well enough so long as only a small percentage of students attended remotely. All too few schools actually had the infrastructure to go wholly virtual, and the requirements and complexities of maintaining a large-scale telepresence operation went far beyond what all but the most far-sighted university chancellors had foreseen. Without the inadvertent cocooning effect those assumptions had given many schools, the attractiveness of universities as institutions was called into question. With tuition dropping, other sources of income, from football game revenues to appeals to alumni, also suffered, making it evident just how dependent these institutions were on physical presence and on the funding that came as a consequence of it.

Many colleges are also facing lawsuits from students and families who were charged for classes that were cut short, and tuition revenue has continued to drop as students became unsure whether they would, in fact, be returning to physical campuses in the fall of 2020 (or the winter or spring of 2021, for that matter). This has starved university budgets that were already being overtaken by administrative and facility costs, with the very real likelihood that by the end of the 2022 school year, many colleges and universities will be hopelessly in the red with little in the way of supporting revenue.

Covid-19 hit at the worst possible time for universities, ironically because of the effects of the Internet (itself largely an academic innovation) on the availability and dissemination of information. One of the primary values of universities has long been their role in helping students acquire specialized knowledge, with a secondary value being that such schools also certify that a person has a sufficient grasp of that knowledge to employ it. Yet while university costs have climbed, the availability of that knowledge outside the formal educational system has also increased, creating the appearance that universities are less about teaching than about certification and gate-keeping. Given that wages have, outside of a few in-demand verticals, remained largely stagnant, a growing number of students have begun to question the value of that education in the face of rising tuition.

To make matters worse, corporate training programs and certifications are now competing with university degrees as sources of accreditation, and are increasingly becoming preferred over four-year or higher degree programs by employers, especially in areas where technology is changing so quickly. These programs also typically attract teachers that may not necessarily have advanced degrees, but often have developed industry knowledge through experience in the field.

Finally, there has been a hollowing out of the pipeline of graduate students and assistant professors, as academic pay has generally not kept pace with even the tepid rise in private-sector wages for skilled talent, and as the ultimate academic prize, tenure, has been phased out at university after university. While associate professorships and above have generally paid moderately well, the principal benefits there typically accrue not to those who publish but to those who patent, usually with the university claiming a significant chunk of the license royalties.

These were existential problems even before Covid-19 manifested itself, but with the severe market downturn in 2020, potential students and their parents began to raise the question most university provosts dreaded hearing: was the value gained by the students worth the cost, especially when that cost might take decades to repay in full?

A university is a business, and any business that fails to adapt to changing market conditions will likely fail. The triple threat of demographically-induced declining student enrollment, the rise of the Internet-enabled competitors challenging the fundamental nature of education, and an overreliance upon external factors such as sports revenues and aggressive claims on patents (exacerbated by the Pandemic undercutting both), have all contributed to a situation where the question comes down to not if education is likely to collapse, but when.

Universities bring together experts and provide them a forum for sharing their knowledge. As the Internet makes expertise available at the click of a button, what impact will that have on traditional schools?

The Democratization of Expertise

In any discussion about academia – colleges, universities and post-graduate schools – one of the usually-unasked questions involves the role that these institutions need to play in a healthy society. Do we need a post-secondary education system as a society? I would argue that, if anything, we need it now more than ever because the need to learn has only grown in the last half-century.

The skills that you need for almost any job today have certainly changed wildly even from what they were a decade ago. Even in fields that would seem on the surface to be fairly timeless – such as archeology – changes in other disciplines (such as genetics, computer visualization, the rise of drones and satellites, and material science) have forced a radical re-evaluation of the models that we’d assumed fixed even a few decades before. The rise of ever more powerful digital tools is giving experts within any given field lenses that would have seemed fantastical at the start of their career.

Indeed, one of the hallmarks of the early twenty-first century is the emergence of the subject matter expert or research specialist as a key part of any organization. Similarly, while information technologies are eroding the traditional role of librarians in society, what they do today – helping to create information systems, establish classification models and taxonomies, and provide the infrastructure to draw insight from those systems – is critical for every organization. The technologies for managing these things are young, many having appeared in the years since these data curators went to school, and without an educational infrastructure in place there is little cohesion in how people gain the skills needed to work with these tools.

Moreover, what that education should provide is not necessarily how to use the tools, but rather the context to make the most of them. It has been well demonstrated that most innovations, in science, technology, the arts, and elsewhere, occur when disciplinary domains collide. The most exciting discoveries in archeology, for instance, are not coming about because of an uptick in new digs, but because archeologists are now able to synthesize their own domain knowledge with what is coming out of population genetics, to visualize what ancient cultures looked like with the help of artificial intelligence and architectural visualization, and to take advantage of advances in climatology to get a better sense of the overall gestalt of those long-gone cultures.

These discoveries and innovations are occurring not because John, a geneticist, met Jane, an archeologist, at the rare university interdisciplinary luncheon, but because the Internet has made it possible to be aware of what is going on outside one's discipline and, having done so, has encouraged communication between potential colleagues. To use a knowledge management metaphor, expertise had become siloed within universities.

This is not just an academic issue, admittedly. Expertise has also become siloed within corporations, as organizations, including publishers, want to monetize their experts by controlling access to them. This has always been a fine line for organizations to walk: expertise is typically expensive, yet the value of that expertise lies not so much in what the expert can do as in the reputation they bring to the organization.

As the reputation economy becomes more dominant, this too provides a conundrum for universities and corporations alike. Reputation comes about due to exposure, and one role that universities in particular play is to provide that forum for exposure to expertise. However, pre-Internet, that exposure came about primarily by being able to reach out to 200-300 people in a small amphitheater on a weekly or biweekly basis. Today, a high school student can have a following of a million people in a live stream, and the wages that the university would pay to that professor are dwarfed by what can be made from advertising revenues on YouTube.

Not surprisingly, those at the lowest rungs of the educational hierarchy have taken a keen interest in what is happening here. Ironically, tenure may be partly to blame. If you think of a university as a network, advancement takes place through vacancies. In most corporations vacancies open up all the time: people leave for higher-paying positions or are promoted up through the ranks, and the need to fill those positions ensures a certain degree of mobility within the organization.

In universities, on the other hand, tenure means that positions open up far more slowly, not just within any given university but across all of them where tenure is present. This in general presents senior administrators with an unenviable set of choices: expel professors for real or imagined wrongs (creating a public relations nightmare), push more tenure-track professors into administrative roles they may be ill-suited for (adding to administrative overhead while reducing the reputational value of those professors), or live with a certain degree of churn at the bottom as grad students and non-tenured professors become disillusioned. Given that the goal of hiring very intelligent people is to increase the reputation of the college, none of these is, or should be, a palatable choice, yet such choices are all too often made without much strategic thought.

The ability to virtualize and componentize books has already had a profound impact upon school life (and reduced an epidemic of lower back pain) but this also makes it easier for non-universities to compete for resources.

The Decoupling of The Textbook

A similar factor involves the production of textbooks. The more forward-thinking textbook publishers came to the realization fifteen to twenty years ago that with the advent of the Internet (and eBooks), their business needed to adapt, or it would die. Textbooks are expensive to publish, require a huge amount of work to pull together, and typically have a comparatively small audience, all of which result in textbooks often costing from $50 to $250 or more. On the other hand, professors and universities both wanted to have their names on a textbook, especially if it became a de-facto standard, because it increased their respective authority in the field. The high costs of producing such textbooks also provided yet another gating factor towards keeping out competitors, as you had to be fairly large to field both the financial and production wherewithal to create them.

The eBook, digital production, and the Internet as a distribution platform, changed that equation completely. Instead of focusing on the book, the publisher began focusing on the chapter, with the idea being that good editing could make chapters become largely stand-alone entities. Digital production meant that you could aggregate different sets of chapters together, possibly with some intelligence in those chapters to allow for differences in graphical branding, as well as the use of specific metadata that would control how content would be displayed in different contexts. Semantic linking to tie related content together and the introduction of search capabilities have also been deployed, neither of which existed in any meaningful way for printed books.

Digital, distributed publishing further meant that such chapters could be combined with others on request (or even made available as standalone content), and eBook distribution platforms meant that books could be delivered to your phone, laptop, or tablet as needed, which reduced distribution costs and cut down dramatically on student chiropractic bills. That this approach gave instructors much more say in presenting the material they felt was relevant was a nice side benefit, while also giving professors the chance to publish parts of a larger work while still focusing on teaching and/or research, rather than having to take sabbaticals (and the financial hit that comes with them) to author a single textbook on their own. These approaches increased their citation counts while bringing in revenue earlier and faster.
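As a rough sketch of that chapter-level model (generic Python, not any particular publisher's system), each chapter carries its own metadata, and a course pack is simply a selection of standalone chapters assembled on demand:

```python
# Generic sketch of chapter-level course-pack assembly; titles, tags, and page
# counts are invented for illustration.
chapters = [
    {"id": "stats-01", "title": "Descriptive Statistics", "tags": {"statistics"}, "pages": 38},
    {"id": "stats-02", "title": "Hypothesis Testing",     "tags": {"statistics"}, "pages": 44},
    {"id": "ml-01",    "title": "Linear Regression",      "tags": {"ml", "statistics"}, "pages": 52},
    {"id": "arch-01",  "title": "Dating Methods",         "tags": {"archeology"}, "pages": 30},
]

def build_course_pack(chapters, wanted_ids):
    """Assemble a custom pack from standalone chapters, in the order requested."""
    by_id = {c["id"]: c for c in chapters}
    return [by_id[i] for i in wanted_ids if i in by_id]

pack = build_course_pack(chapters, ["stats-01", "ml-01"])
print("Course pack:", [c["title"] for c in pack],
      "-", sum(c["pages"] for c in pack), "pages")
```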

Not surprisingly, as the Internet has made the dissemination of ever more complex forms of media trivial, many instructors now regularly supplement their income by creating not just written text but full curriculum materials, adding to their reputations in the process. This has changed the nature of the relationship between instructor and university, shifting from an employment arrangement to an affiliation. While this may reduce the university's direct costs, it comes at the expense of a weaker relationship between the two parties, part of a broader trend between organizations and the people who work with (rather than ostensibly for) them.

Of course, it should be noted that the publishers that didn’t adapt are no longer around, having been acquired for their catalogs by those that did.

Therein lies a strong cautionary tale for universities. The Case of Too Many MBAs provides another such tale. The very first Master of Business Administration, or MBA, was presented by Harvard University in 1908. For a number of years, the exclusivity of the degree meant that it was a highly sought-after certification. By the 1970s, most major universities with graduate schools had MBA programs, and there was a growing backlash as people who were hired because of the MBA were proving unable to be effective managers in those businesses that required specialized technical knowledge – which usually meant most companies. By 2020, MBAs were given about the same weight as a Master’s degree in any other field, and in many cases, less.

The MBA was devalued by ubiquity. Unfortunately, the Internet is all about ubiquity.

The ivory towers may still remain, but geolocation-based mass-education is already fading compared to personalized education – tailored to the individual, focusing more on certification than credentialism, and no longer dependent upon being there.

The Future of College Education

Given all of these factors, it is a safe bet that academia will need to change dramatically to survive. Several trends will shape what the education of tomorrow (and it may be a very near tomorrow) looks like, including many or all of the following:

  • Distributed and De-Localized Education. The trend of untying education from a physical place and making it virtual was already underway before Covid-19, but it has accelerated dramatically in the aftermath of the pandemic. In effect, universities are becoming media publishing companies, with the media in question being “educational” in nature. Hybrid models may very well emerge as a consequence, but such a hybrid would treat universities more like retreats than physical institutions.
  • Decommissioning Campuses. Just as many banks are becoming nervous about commercial downsizing of both office space and production facilities, so too should university boards of directors be sweating the closure and decommissioning of campuses in favor of virtual instruction. A significant number of college campuses were either constructed or expanded in the 1960s and early 1970s as the Boomer generation started to go to college. These facilities are now 50-60 years old and showing their age as the costs to keep these campuses functional continue to climb. As classes empty, the incentive to just sell off the campus property outright will become unavoidable.
  • Become Global. The flip-side of this is that the location (or even nationality) of the student should not be a gating criterion. The European Union already provides an example of this, where anyone within the EU can attend an EU university without penalty for being “out of state” or an “international student”.
  • A Move Towards Certification Rather Than Credentials. In many respects, academia needs to become more agile, and one way it can do that is to reduce the overall time it takes to receive some form of certification, possibly down to the half-quarter or quarter level (i.e., six weeks to three months).
  • Make All Education Continuing. One major problem with the credential approach is that it tends to compress education into the period from age 18 to 25, with continuing education often given short shrift by universities, which see it as unprofitable. By moving towards a certification model, you give people who are in the midst of their careers the opportunity to learn from the best without forcing them to take a hiatus from those careers. It also fattens the tail, so that the drop in immediate post-secondary students can be offset by more, older students.
  • Sell Certification, Not Education. Colleges need to acknowledge what has largely been unsaid for decades – you go to school to get the certificate, not to get an education. Once you do that, it opens up an alternative way towards paying for education: you can take the class repeatedly for free, but, especially with advanced classes, you only get credit (and only pay for credit) when you pass.
  • Move Towards Electronic Mediation. Teaching a class of ten thousand students is far different from teaching one of thirty, especially when it comes to grading homework and tests. Using a combination of AI for essays, electronic mediation of tests, crowd-sourcing, and rules-based analysis, an instructor can more readily identify where students are having trouble, which is the real value of tests (a minimal sketch of this kind of rules-based analysis follows this list). Currently, most of this work is done by graduate students, at the cost of their own studies.
  • Individualized Curricula of Study. Most traditional universities work on the assumption that there are specific courses you must take, in a specific sequence, to earn a degree. This can force a student to take courses they have already mastered or that lie outside their desired area of expertise, and in general it reduces the possibility of cross-specialization at a time when cross-specialization is a highly desired trait. If a student lacks the skills to pass an advanced class, they simply drop back to a simpler class at no penalty.
  • Student Community MOOCs. Give more senior students the tools to create educational materials for junior students. Build an active community of users, combining the features of MOOCs and virtual worlds, with some mediation from instructors, and give credit for this in completing advanced classes.
  • Drop Educational Degree Requirements for Instructors. Teaching is a skill, like any other, and you do not need a Master’s (a six-year degree) or Ph.D. (an eight-year degree) to teach. There are a great number of people with real-life skills and experiences that are prohibited from teaching because they cannot take the time from already busy careers to spend two years learning what can be taught in six months. They are going elsewhere to teach, to academia’s loss.
  • Spin Off Research Institutes. This reflects the fact that all too frequently professors treat their graduate students as free labor, often taking advantage of their ideas and input while delivering little of value in return. This is a toxic relationship that has become institutionalized. Universities would be better served spinning off their research work as separate businesses and then hiring graduate students as associate researchers. There is nothing that says a professor cannot be employed in both capacities, if he or she so chooses, but by separating these two roles, you avoid forcing brilliant researchers who are awful teachers to teach (or vice versa). This also holds true for the arts and humanities (consider the Clarion Writer's conference as being a graduate school for writers).
  • A League of Their Own. Football has been a defining trait of US universities for decades, but that is an aberration. Baseball, hockey, and similar team sports have tiered farm systems where young players can concentrate on learning the basics; why not football? As with research institutes and retreats, a university could spin off its football team into an (already extant) league, letting it negotiate both physical and pay-per-view rights independently of students seeking an education, while still maintaining branding ties and supporting revenues.
  • Integrate Vendors and Companies. Education is an expensive proposition for most corporations, and many are loath to create extensive training programs without some kind of financial backstop to make training revenue-neutral at a minimum. One idea that may come out of reimagining the post-academic world is a conceptual platform that both governmental and private organizations can plug into. That way, organizations can specialize in providing education of value to both customers and users while remaining findable within a broader network of curricula.
  • Begin With the Community Colleges. Most community colleges are already experimenting with several facets described here, and there’s a push at the federal and increasingly the state level to make at least the first two years of community college free. This is a good thing, but it also needs to be done in such a way as to be both sustainable long term and largely protected from shifts in the direction of political winds.
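Picking up the electronic-mediation point referenced above, here is a minimal, tool-agnostic Python sketch of the rules-based side of that idea: aggregating per-question error rates across a large class to flag the topics where students are struggling. The responses and threshold are invented for illustration.

```python
# Toy rules-based analysis of test results; the responses are invented and the
# threshold is arbitrary - this only illustrates the aggregation idea.
from collections import Counter

# Each record: (question_id, topic, answered_correctly)
responses = [
    ("q1", "integration", True),  ("q1", "integration", False),
    ("q1", "integration", False), ("q2", "limits", True),
    ("q2", "limits", True),       ("q3", "integration", False),
]

attempts, misses = Counter(), Counter()
for _, topic, correct in responses:
    attempts[topic] += 1
    if not correct:
        misses[topic] += 1

TROUBLE_THRESHOLD = 0.5  # flag topics missed on more than half the attempts
for topic in attempts:
    error_rate = misses[topic] / attempts[topic]
    if error_rate > TROUBLE_THRESHOLD:
        print(f"flag '{topic}' for review ({error_rate:.0%} error rate)")
```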

Speculative fiction author William Gibson is credited with saying, “The future has arrived — it’s just not evenly distributed yet.” This holds very much true for post-secondary education. No, we are not going to see a wasteland of boarded-up university campuses any time soon, but even as ivy-covered halls continue to stand for decades, it is important to realize that the underlying nature of how we educate people is undergoing radical change right now, and that education a decade from now may look very, very different than it does today.

Kurt Cagle is the editor of The Cagle Report, and is the Community Editor for DataScienceCentral.com. He lives in Issaquah, Washington with his family and his cats.

