Helge Scherlund's eLearning News

Check out the weblog every day and keep up to date on the latest news and information about flexible, net-based learning and teaching, e-learning, blended learning, distance learning and m-learning. Links to the best web pages on the internet, articles, and conferences and seminars about e-learning. Mediation of knowledge and experiences within research and development of modern digital, interactive media. I hope that you find this service useful and have a good time reading!

URL: http://scherlund.blogspot.com/

Updated: 4 days, 21 hours ago

How To Start A Data Science Career As An Undergrad | Forbes

"How do I choose an internship that prepares me for a data science career as an undergraduate student? originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world." Quora, Contributor.

Answer by Alex Francis, Data Scientist, on Quora:

Photo: Hemant Mishra/Mint via Getty Images
How do I choose an internship that prepares me for a data science career as an undergraduate student? 
I think the answer to this question really depends on the company/role/industry combination, but if your background resembles my own, I’ll take a stab at the question.
To start, I strongly believe that completing an internship is more valuable than an “ML related summer research project,” unless that research is done in the context of a respected laboratory at your university, and you have the explicit goal of publishing a paper that will help you gain admission to top graduate programs in machine learning. With that said, while internship roles in data science at tech companies are plentiful (see, for example, What companies have data science internships for undergraduates?), finding companies that actively hire undergraduates is non-trivial (in my experience). You’ll need to be aggressive, sometimes applying for and following up with recruiters on roles in which a graduate degree is “recommended” or even “required.” Finding companies that are willing to take a chance on a younger candidate will be an inevitable filter - luckily, several great companies are willing to engage with undergraduates. I evaded this artificial barrier by interning as a “data engineer,” and working on infrastructure related to the data science team. This gave me valuable insights into the day-to-day efforts of a data scientist.
Secondly, I highly recommend choosing to work on a product with which you have some familiarity. This is the most underrated element of the decision-making process, in my opinion. As a data scientist, you will constantly be called upon to generate and test hypotheses about the product, produce insights, and suggest future directions. If you’re an active user of the product, this isn’t nearly as difficult — in fact, it’s often fun! Targeting companies that create products you love will make you a better interviewer and a better employee...
To summarize, successfully navigating a data science career is probably not so different than managing a career in any other field: set some goals, be aggressive, know your worth, figure out what’s fun and what isn’t, reset those goals, and repeat the cycle. Best of luck! Read more...
Source: Forbes

NASA Explores Artificial Intelligence for Space Communications | NASA

"NASA spacecraft typically rely on human-controlled radio systems to communicate with Earth. As collection of space data increases, NASA looks to cognitive radio, the infusion of artificial intelligence into space communications networks, to meet demand and increase efficiency" continues NASA.

This photo was taken of NASA's Space Communications and Navigation Testbed before launch. Currently affixed to the International Space Station, the SCaN Testbed is used to conduct a variety of experiments with the goal of further advancing other technologies, reducing risks on other space missions, and enabling future missions.
Photo: NASA
“Modern space communications systems use complex software to support science and exploration missions,” said Janette C. Briones, principal investigator in the cognitive communication project at NASA’s Glenn Research Center in Cleveland, Ohio. “By applying artificial intelligence and machine learning, satellites control these systems seamlessly, making real-time decisions without awaiting instruction.”

To understand cognitive radio, it’s easiest to start with ground-based applications. In the U.S., the Federal Communications Commission (FCC) allocates portions of the electromagnetic spectrum used for communications to various users. For example, the FCC allocates spectrum to cell service, satellite radio, Bluetooth, Wi-Fi, etc. Imagine the spectrum divided into a limited number of taps connected to a water main.

What happens when no faucets are left? How could a device access the electromagnetic spectrum when all the taps are taken?

Software-defined radios like cognitive radio use artificial intelligence to employ underutilized portions of the electromagnetic spectrum without human intervention. These “white spaces” are currently unused, but already licensed, segments of the spectrum. The FCC permits a cognitive radio to use the frequency while unused by its primary user until the user becomes active again.

In terms of our metaphorical watering hole, cognitive radio draws on water that would otherwise be wasted. The cognitive radio can use many “faucets,” no matter the frequency of that “faucet.” When a licensed device stops using its frequency, cognitive radio draws from that customer’s “faucet” until the primary user needs it again. Cognitive radio switches from one white space to another, using electromagnetic spigots as they become available.
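To make the channel-hopping idea concrete, here is a minimal Python sketch (our illustration, not NASA's software): a radio checks a handful of licensed channels each time step and transmits on whichever one its primary user has left idle. The channel names and occupancy pattern are invented for the example.

```python
# A minimal sketch (not NASA's actual software) of the "faucet switching" idea:
# a cognitive radio scans a set of licensed channels each time step and moves
# its transmission to any channel whose primary user is currently idle.
import random

CHANNELS = ["ch1", "ch2", "ch3", "ch4"]

def primary_user_active(channel, t):
    """Stand-in for spectrum sensing: randomly decide whether the licensed
    (primary) user is occupying this channel at time t."""
    random.seed(hash((channel, t)))        # deterministic toy occupancy pattern
    return random.random() < 0.6

def pick_white_space(t, current=None):
    """Stay on the current channel while it is free; otherwise hop to the
    first idle channel (a 'white space'). Return None if every tap is taken."""
    if current and not primary_user_active(current, t):
        return current
    for ch in CHANNELS:
        if not primary_user_active(ch, t):
            return ch
    return None

channel = None
for t in range(10):
    channel = pick_white_space(t, channel)
    print(f"t={t}: transmitting on {channel or 'nothing free - waiting'}")
```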

“The recent development of cognitive technologies is a new thrust in the architecture of communications systems,” said Briones. “We envision these technologies will make our communications networks more efficient and resilient for missions exploring the depths of space. By integrating artificial intelligence and cognitive radios into our networks, we will increase the efficiency, autonomy and reliability of space communications systems.”

For NASA, the space environment presents unique challenges that cognitive radio could mitigate. Space weather, electromagnetic radiation emitted by the sun and other celestial bodies, fills space with noise that can interrupt certain frequencies.

“Glenn Research Center is experimenting in creating cognitive radio applications capable of identifying and adapting to space weather,” said Rigoberto Roche, a NASA cognitive engine development lead at Glenn. “They would transmit outside the range of the interference or cancel distortions within the range using machine learning.” 

In the future, a NASA cognitive radio could even learn to shut itself down temporarily to mitigate radiation damage during severe space weather events. Adaptive radio software could circumvent the harmful effects of space weather, increasing science and exploration data returns.
Read more...

Source: NASA

Writing Music with the Mind: New BCI Modality Offers the Power to Make Music as well as Play It | Evolving Science - Computer Science & Technology

Photo: Deirdre O'Donnell
"Brain-computer interfaces (BCIs) that allow people with severe neuromotor or motor disorders to communicate are becoming more and more common," says Deirdre O'Donnell, a professional writer for several years. Deirdre is also an experienced journalist and editor.

Music brain. 
Photo: (CC BY-SA 4.0)
This is realised by scanning brainwaves using electroencephalography (EEG) and converting them accurately into words, letters or other objects that the user intends to replicate in their minds. BCIs are beneficial for those with extensive paralysis, ‘locked-in’ syndrome and other similar conditions.

EEG-powered BCIs have a number of advantages: they are non-invasive (the brainwaves are 'picked up' through the skull using electrodes integrated into a head-hugging cap), well validated, well recognized, and often relatively cheap. In addition, certain ranges of brainwaves (or event-related potentials (ERPs), as some are also known) have also been exhaustively studied and exploited for the purposes of BCI. They include the posterior-dominant rhythm, which is associated with the high-fidelity selection of notes and other musical objects on virtual instruments, and the P300 wave, which enables patients to complete tasks such as selecting numbers in a BCI interface. As such, some researchers suggest that P300 could also be used to specify and select musical notes for those who need to write music using a BCI. A recent study published in PLOS ONE suggests that this is indeed possible. This is good news for patients who want to write their own music completely hands-free as well as play it.
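For readers curious how a P300 response can drive a selection at all, here is a small synthetic Python sketch (not the Graz group's actual processing chain, which is described below): noisy EEG epochs are simulated for each candidate note, averaged, and the candidate whose average shows the largest positivity around 300 ms is taken as the intended selection. All signal values and parameters are invented for the illustration.

```python
# A minimal, synthetic sketch of the selection principle behind a P300 composer
# (not the study's actual pipeline): candidate notes are flashed repeatedly,
# EEG epochs are averaged per candidate, and the candidate whose average shows
# the largest positivity around 300 ms is taken as the intended note.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 0.8, 1 / fs)              # 0.8 s epoch after each flash
notes = ["C", "D", "E", "F", "G"]
target = "E"                               # the note the user is attending to

def epoch(is_target):
    """Simulate one EEG epoch: noise, plus a P300-like bump if attended."""
    signal = rng.normal(0, 5, t.size)      # background EEG noise (microvolts)
    if is_target:
        signal += 8 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return signal

# 20 flashes per candidate; averaging the epochs suppresses the noise.
averages = {n: np.mean([epoch(n == target) for _ in range(20)], axis=0)
            for n in notes}

# Score each candidate by its mean amplitude in the 250-450 ms window.
window = (t >= 0.25) & (t <= 0.45)
scores = {n: avg[window].mean() for n, avg in averages.items()}
selected = max(scores, key=scores.get)
print("Selected note:", selected)          # recovers "E" in this toy setup
```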

Composing music through thoughts 
This team, based at the Institutes of Neural Engineering and Psychology at Graz University of Technology, developed a BCI interface that was also loaded with software supporting musical composition. They based their work on existing BCIs that allow patients to paint using software somewhat similar to many conventional drawing or art apps found on everyday computers, and also on previous studies that showed a 75 percent accuracy rate in selecting notes from the C major scale using a modified BCI spelling programme. The team, supervised by Gernot Müller-Putz of the Neural Engineering institute, hypothesised that such BCI musical composition was also applicable to a more extensive range of options. In other words, they linked their EEG receivers to the full musical composition suite MuseScore. A previous pilot study evaluating this approach in five healthy individuals found that this group could input a given melody into the software via the BCI with an accuracy of up to approximately 96 percent, although 40 percent of them could complete the task at an accuracy of about 50 percent. Therefore, the group designed a new experiment in which they recruited 18 healthy volunteers with musical backgrounds, including a professional composer, to test their ability to spell, write a full, pre-determined melody and compose original sequences using this BCI system.

MuseScore 2.0 Preview


The team used a fairly conventional BCI system, which consisted of EEG detection and recording hardware, P300-wave-based software (written largely in C, with Matlab for signal processing) and the MuseScore software, which was controlled in turn by the P300 software.
Read more...  

References
Pinegger A, Hiebel H, Wriessnegger SC, Müller-Putz GR. Composing only by thought: Novel application of the P300 brain-computer interface. PLoS ONE. 2017. 12(9): e0181584. https://doi.org/10.1371/journal.pone.0181584
Deuel TA, Pampin J, Sundstrom J, Darvas F. The Encephalophone: A Novel Musical Biofeedback Device using Conscious Control of Electroencephalogram (EEG). Frontiers in Human Neuroscience. 2017;11(213).

Source: Evolving Science and MuseScore HowTo Channel (YouTube)

But What If They Cheat? Giving Non-Proctored Online Assessments | Faculty Focus - Online Education

"As online education continues to grow, so does the potential for academic dishonesty" summarizes Sheryl Cornelius, registered nurse who has been teaching for the last 15+ years in universities and community colleges.

Photo: Faculty Focus
So how do you ensure your online students are not cheating on their tests? Bottom line, you don’t. But there are ways to stack the deck in your favor.

The good news is it's not as bad as you think. A 2002 study by Grijalva, Kerkvliet, and Nowell found that "academic dishonesty in a single online class is no more prevalent than in traditional classrooms" (Paullet, Chawdhry, Douglas & Pinchot, 2016, p. 46). Although the offenders have become quite creative in their endeavors, prevention remains the best defense.

First, start by creating a culture of integrity. Many institutions have students review the school's Honor Code and sign a "pledge." The first question on every exam I give is True/False: "I will follow the Honor Code while taking this assessment." It follows the same logic as locking doors to keep honest people honest, but it also serves as a good reminder of the possible consequences, which is often enough to keep many students from breaking the rules.

Second, do not set rules that you have no way to enforce, e.g., forbidding the use of books, notes, or other resources. Instead, ask questions whose answers will not be evident in those resources, such as items where students have to analyze, evaluate, and think critically about the content. Essay questions, case study analyses, fill-in-the-blank items, sequencing questions, and hot spot questions are difficult to look up. It also helps to set a time limit for the test so that Googling answers becomes impractical.

Third, make every assessment different. No, I am not saying create 25 exams, but you can scramble questions and create multiple versions of the same test. If everyone finishes the exam with an essay question, you can create three different questions and have one randomly assigned to each exam. If you have deep enough test banks, you can have several different test versions with no question being repeated. Anything you can do to mix up the versions can deter deceitful activity.
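As a rough illustration of the version-mixing advice above, the following Python sketch assembles a per-student exam from a hypothetical question bank: multiple-choice items are drawn without repeats, the order is scrambled, and one of three essay prompts is assigned at random. The bank contents and function names are made up for the example.

```python
# A rough sketch of the "mix up the versions" advice (hypothetical question
# bank and helper names): draw multiple-choice items from a bank, shuffle the
# order, and randomly assign one of three essay prompts per student.
import random

mc_bank = [f"MC question {i}" for i in range(1, 31)]     # 30-item bank (toy)
essay_prompts = ["Essay prompt A", "Essay prompt B", "Essay prompt C"]

def build_exam(student_id, n_mc=10):
    rng = random.Random(student_id)        # reproducible per-student version
    questions = rng.sample(mc_bank, n_mc)  # draw without repeats from the bank
    rng.shuffle(questions)                 # scramble the question order
    questions.append(rng.choice(essay_prompts))
    return questions

for student in ["s001", "s002", "s003"]:
    print(student, build_exam(student)[:3], "...")
```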

Many instructors withhold feedback until the exam has closed. In this way, no one can pass answers on to others. Some will make the exam synchronous for this very reason. However, making the exam synchronous takes away flexibility for online students who work unusual shifts.
Read more...

Source: Faculty Focus

Will artificial intelligence become conscious? | WTOP - National News


Photo: Kak Subhash
Author: Subhash Kak, Regents Professor of Electrical and Computer Engineering, Oklahoma State University.
Original article: https://theconversation.com/will-artificial-intelligence-become-conscious-87231

The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.
(THE CONVERSATION) 
Forget about today’s modest incremental advances in artificial intelligence, such as the increasing abilities of cars to drive themselves. 

Photo: FreeDigitalPhotos.net
Waiting in the wings might be a groundbreaking development: a machine that is aware of itself and its surroundings, and that could take in and process massive amounts of data in real time. It could be sent on dangerous missions, into space or combat. In addition to driving people around, it might be able to cook, clean, do laundry – and even keep humans company when other people aren’t nearby. 

A particularly advanced set of machines could replace humans at literally all jobs. That would save humanity from workaday drudgery, but it would also shake many societal foundations. A life of no work and only play may turn out to be a dystopia. 

Conscious machines would also raise troubling legal and ethical problems. Would a conscious machine be a “person” under law and be liable if its actions hurt someone, or if something goes wrong? To think of a more frightening scenario, might these machines rebel against humans and wish to eliminate us altogether? If yes, they represent the culmination of evolution. 

As a professor of electrical engineering and computer science who works in machine learning and quantum theory, I can say that researchers are divided on whether these sorts of hyperaware machines will ever exist. There’s also debate about whether machines could or should be called “conscious” in the way we think of humans, and even some animals, as conscious. Some of the questions have to do with technology; others have to do with what consciousness actually is. 

Most computer scientists think that consciousness is a characteristic that will emerge as technology develops. Some believe that consciousness involves accepting new information, storing and retrieving old information and cognitive processing of it all into perceptions and actions. If that’s right, then one day machines will indeed be the ultimate consciousness. They’ll be able to gather more information than a human, store more than many libraries, access vast databases in milliseconds and compute all of it into decisions more complex, and yet more logical, than any person ever could. 

On the other hand, there are physicists and philosophers who say there’s something more about human behavior that cannot be computed by a machine. Creativity, for example, and the sense of freedom people possess don’t appear to come from logic or calculations. 

Yet these are not the only views of what consciousness is, or whether machines could ever achieve it. 

Another viewpoint on consciousness comes from quantum theory, which is the deepest theory of physics. According to the orthodox Copenhagen Interpretation, consciousness and the physical world are complementary aspects of the same reality. When a person observes, or experiments on, some aspect of the physical world, that person’s conscious interaction causes discernible change. Since it takes consciousness as a given and no attempt is made to derive it from physics, the Copenhagen Interpretation may be called the “big-C” view of consciousness, where it is a thing that exists by itself – although it requires brains to become real. This view was popular with the pioneers of quantum theory such as Niels Bohr, Werner Heisenberg and Erwin Schrödinger. 

The interaction between consciousness and matter leads to paradoxes that remain unresolved after 80 years of debate. A well-known example of this is the paradox of Schrödinger’s cat, in which a cat is placed in a situation that results in it being equally likely to survive or die – and the act of observation itself is what makes the outcome certain. 

The opposing view is that consciousness emerges from biology, just as biology itself emerges from chemistry which, in turn, emerges from physics. We call this less expansive concept of consciousness “little-C.” It agrees with the neuroscientists’ view that the processes of the mind are identical to states and processes of the brain. It also agrees with a more recent interpretation of quantum theory motivated by an attempt to rid it of paradoxes, the Many Worlds Interpretation, in which observers are a part of the mathematics of physics. 

Philosophers of science believe that these modern quantum physics views of consciousness have parallels in ancient philosophy. Big-C is like the theory of mind in Vedanta – in which consciousness is the fundamental basis of reality, on par with the physical universe.
Read more...

Source: WTOP

16th annual New College holiday book guide | Sarasota Herald-Tribune

Each year, the Herald-Tribune invites faculty, administrators and staff at New College of Florida to recommend to our readers books that had a particular impact on them. They don't have to be new.

Photo: Storyblocks.com
Would you like to choose a gift that inspires someone to think about life’s most important questions? Many of this year’s books explore the challenges of understanding our relationship to the past, achieving wisdom or practical success, or remaking the world by opening ourselves to new ideas. Authors reevaluate the New Deal, examine the religious revival in China, lay out 34 plans for newly enlightened national policies, or rescue the skills of their ancestors from being erased by history. Literary works range from the grim humor of historical fiction to a blend of fantasy and reality in an environmental novel. In graphic novels from different cultures, authors reinvent “Frankenstein” by portraying a black scientist reanimating her child, or reimagine Frankenstein as a creature made of war dead in the Arab world.

Wishing everyone a year filled with rewarding books, we present the 16th annual New College Holiday Book Guide, with reviews compiled by English professor Andrea Dimino.  

Donal O’Shea, President of New College
 
Photo: Donal O'Shea
The recent book "Prime Numbers and the Riemann Hypothesis" (Cambridge University Press, 2016) by Barry Mazur and William Stein, two accomplished mathematicians and fine prose stylists, will make a great gift for a curious student. Using the graphical methods found in calculus reform texts, this beautiful little book allows a patient reader with a good grasp of first-year calculus to explore the most famous unsolved problem in mathematics, the so-called Riemann Hypothesis, and to understand why it points to as yet undiscovered regularities in the distribution of prime numbers.

Bernhard Riemann introduced the hypothesis in 1859 in a brief, dazzling paper that seems to have come out of some strange alien universe. The Riemann Hypothesis has since become the Holy Grail for generations of mathematicians, and continues to resist all attempts at solution. The process of establishing weaker variants has resulted in major mathematical advances, and made careers. Until now, however, the deeper significance of the hypothesis has been inaccessible to those without advanced mathematical training. Mazur and Stein’s book fixes that, and is itself a first-rate intellectual achievement. 
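For a taste of the regularity the hypothesis concerns (our own illustration, not an excerpt from the book), a few lines of Python comparing the prime-counting function pi(x) with the smooth estimate x/ln(x) already show how closely the primes track a simple formula; the Riemann Hypothesis is, roughly, a statement about how small the error in such estimates can ultimately be made.

```python
# A small numerical illustration (ours, not the book's): the prime-counting
# function pi(x) tracks the smooth estimate x/ln(x), with the ratio drifting
# toward 1 as x grows.
from math import log

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

for x in [10**3, 10**4, 10**5, 10**6]:
    pi_x = len(primes_up_to(x))
    approx = x / log(x)
    print(f"x={x:>8}  pi(x)={pi_x:>7}  x/ln(x)={approx:>10.1f}  ratio={pi_x / approx:.3f}")
```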
Read more... 

Source: Sarasota Herald-Tribune 

AP Computer Science Principles Paves the Way to STEM Success | Black Enterprise

Follow on Twitter: @RobinWhiteGoode
"It's Computer Science Education Week—and there's probably no better time to look at where we are as a nation in accomplishing our computer science goals," according to Robin White Goode.

Photo: The College Board
“When the National Science Foundation approached the College Board in 2008, there was an overall problem of participation in computer science, particularly among traditionally underrepresented students,” says Maureen Reyes, executive director of the College Board’s Advanced Placement program.

In 2007, 15,049 students enrolled in an AP computer science course. This year, following last year's rollout of the new AP Computer Science Principles course, 104,849 students took an AP computer science exam (including those who took the AP Computer Science A course).

Not only that, but the students are more diverse: The number of African American and Latino students more than doubled; the number of females doubled. The number of African American students earning a 3 or higher on an AP computer science exam almost tripled in 2017 with the addition of the new Computer Science Principles course.

In 2005, 21 states had not even one African American student in an AP computer science course—that number is now down to five. Reyes told me that the course is the largest launch in AP history—and AP has been around for 60 years...

Laying a Foundation 
Jeffrey Lowenhaupt, a math teacher at the Bronx Center for Science and Mathematics in New York, taught the new course last year.

“It provides a really good perspective on how computer science fits into our daily lives. I never taught the old course, but speaking with some of the other teachers that did, it’s just pure coding. You learn Java and probably get a deeper understanding of one specific computing language, but this course engages the kids more because they learn about the internet, about social implications. They’re required to do research on emerging technologies, and there’s a lot more than just coding,” Lowenhaupt told me.
Read more... 

Source: Black Enterprise 

'Good Will Hunting' turns 20: 9 stories about the making of the film | ABC News

Photo: Lesley Messer
Lesley Messer, Entertainment Editor, notes, "It's been 20 years since 'Good Will Hunting' hit movie theaters and made household names of its screenwriters and stars, Ben Affleck and Matt Damon."

Ben Affleck and Matt Damon in "Good Will Hunting," 1997.
A critical darling, "Good Will Hunting" won two of the nine Oscars for which it was nominated, including one for Affleck and Damon and another for supporting actor Robin Williams -- and proved to be lucrative to boot.
According to Box Office Mojo, "Good Will Hunting" earned $225,933,435 worldwide and was ranked as the seventh most lucrative film of 1997.
However, despite the film's tremendous success, there are still things about its production that even the biggest "Good Will Hunting" fans might not know. 
1. The script began as a school project: Damon told Boston Magazine in 2013 that he began writing "Good Will Hunting" for a playwriting class he was taking at Harvard University. After the course ended, he asked his childhood friend Affleck to help him flesh out the story. "We came up with this idea of the brilliant kid and his townie friends, where he was special and the government wanted to get their mitts on him. And it had a very 'Beverly Hills Cop,' 'Midnight Run' sensibility, where the kids from Boston were giving the NSA the slip all the time," Affleck told the magazine. "We would improvise and drink like six or 12 beers or whatever and record it with a tape recorder. At the time we imagined the professor and the shrink would be Morgan Freeman and [Robert] De Niro, so we’d do our imitations of Freeman and De Niro. It was kind of hopelessly naive and probably really embarrassing in that respect." Damon said that the only scene that survived from his initial draft was the one in which his character, a math genius, meets his psychologist, played by Williams...
2. Will Hunting was originally a physics genius: At the suggestion of Harvard professor and Nobel Prize-winning physicist Sheldon Glashow, Damon's character, Will Hunting, became a mathematics genius instead. Glashow's brother-in-law, Massachusetts Institute of Technology professor Daniel Kleitman, went on to work with Damon and Affleck to ensure that the dialogue would be authentic. “When they asked me, ‘Can you speak math to us?’ my mouth froze,” he told MIT's website. “I felt silly mumbling random math so I found a postdoc, Tom Bohman. We went down to the old math lounge in Bldg. 2 and I gave a quick lecture. They took notes, but they didn’t really know what we were talking about.” Read more...  
Source: ABC News

Use machine learning to find energy materials | Nature.com - Comment

Photo: Edward Sargent
Artificial intelligence can speed up research into new photovoltaic, battery and carbon-capture materials, argues Edward Sargent, professor in the Department of Electrical and Computer Engineering, University of Toronto, Canada.

A solar module on display at an expo in Tokyo.
Photo: Yuriko Nakao/Reuters
The world needs more energy. Governments and companies are investing billions of dollars in technologies to harvest, convert and store power [1]. And as silicon solar cells approach the limit of their performance, researchers are looking to alternatives based on perovskites and quantum dots [2].

The batteries that store the energy must get cheaper, more efficient and longer-lasting [3]. And devices need to be manufactured from safe and abundant materials such as copper, nickel and carbon rather than from lead, platinum or gold. Life-cycle analyses of the materials need to show improved carbon footprints, as well as the ability to match the scale of the global energy challenge.

Enormous quantities of experimental data are being generated on the properties of such materials. The US National Institute of Standards and Technology, for example, hosts 65 databases, some with as many as 67,500 measurements. Also, since 2010, more than 1.7 million scientific papers have been published on batteries and solar cells alone.

Relating the structure of a material to its function needs accelerating. The search space is vast. Many materials are still found empirically: candidates are made and tested a few samples at a time. Searches are subject to human bias. Researchers often focus on a few combinations of the elements that they deem interesting. 

Computational methods are being developed that automatically generate structures and assess their electronic features and other properties [4]. The Materials Project, for instance, is using supercomputers to predict the properties of all known materials [5]. It currently lists predicted properties for more than 700,000 materials. But the tremendous potential to translate such data into industrial and commercial applications is still a long way from being realized. 

Machine learning — algorithms trained to find patterns in data sets — could greatly speed up the discovery of energy materials. It has already been used to predict the results of quantum simulations to identify potential molecules and materials for flow batteries, organic light-emitting diodes [6], organic photovoltaic cells and carbon dioxide conversion catalysts [7]. The algorithms can predict results in a few minutes, compared with the hundreds of hours it takes to run the simulations [8].
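As a toy illustration of that surrogate-model idea (synthetic numbers, not a real materials database), the sketch below fits a scikit-learn regressor to a few numeric "descriptors" and then scores thousands of hypothetical candidates in a fraction of a second, standing in for simulations that would otherwise take hours each.

```python
# A toy sketch of the surrogate-model idea described above (synthetic data,
# not a real materials database): fit a regression model to a handful of
# numeric "descriptors" so new candidates can be scored in milliseconds
# instead of re-running an expensive simulation for each one.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
X = rng.uniform(0, 1, size=(n, 4))   # e.g. composition fractions, lattice parameter (toy)
# Pretend this is the property a simulation would compute (a band gap, say),
# with some noise; in reality each row would cost hours of compute.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.5 * X[:, 2] * X[:, 3] + rng.normal(0, 0.05, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Held-out R^2:", round(model.score(X_test, y_test), 3))

# Screening: rank 10,000 hypothetical candidates by predicted property value.
candidates = rng.uniform(0, 1, size=(10_000, 4))
best = candidates[np.argmax(model.predict(candidates))]
print("Most promising candidate descriptors:", np.round(best, 3))
```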

Challenges remain, however. There is no universal representation for encoding materials. Different applications require different properties, such as elemental composition, crystal structure and conductivity. Well-curated experimental data on materials are rare, and computational tests of hypotheses rely on assumptions and models that may be far from realistic under experimental conditions. 

The machine-learning and energy-sciences communities should collaborate more. They must understand each other’s capabilities and needs. We offer the following recommendations, which came out of a workshop run by the Canadian Institute for Advanced Research in May in Boston, Massachusetts...

WHAT NEXT
... more investment is needed in artificial intelligence and robotics-driven materials research throughout the world. More data must be made available to people programming the robots. And experimentalists, robotics experts and algorithm designers should communicate and collaborate more to facilitate rapid troubleshooting.

Time is running out to find the new energy technologies the world needs.
Read more... 


Source: Nature.com

eLearning Trends for 2018 | Docebo

"2018 is coming – are you ready to take on the new year, head on? Read on to find out." inform Docebo.

Download the report.
The eLearning industry is in a constant state of flux, adapting and changing as technology, user needs, and best practices change.

In recent years, we've seen significant budget allocation directed towards eLearning initiatives. When companies put their money where their mouths are, it's a clear indication of the organizational priorities at play. As well, there has been a noted increase in eLearning initiatives in global geographic markets that aligns with a shift towards the prioritization of training and development in markets worldwide. 

Docebo writes in the EXECUTIVE SUMMARY, "The eLearning industry is never static. Just as technology continues to evolve, reinvent itself, and thrive (or die) in all technology-reliant industries, the same is true in eLearning specifically, and learning and development (L&D), broadly."
 
In eLearning in particular, we have seen evidence of this through shifting budget allocations for eLearning programs, the increasing prevalence of eLearning in different global geographic markets, new and emerging trends in technologies that support eLearning, and the ever-increasing role of social learning as a key L&D priority. 

The global L&D industry is complicated, with many moving parts, disruptive technologies, and shifting priorities. That’s why we have developed eLearning Trends for 2018, a complete report outlining the global state of L&D – and eLearning in particular. For this comprehensive report, we have pored over metrics and insights pulled together by some of the leading analyst voices and organizations in the space. We have also collected a wide array of data and statistics that highlight how eLearning is changing and evolving.

Some of the key topics readers can look forward to in the pages ahead include:
  • An assessment of the size of the eLearning market globally, including budget allocations for eLearning purposes and the drivers of growth and development in the L&D industry. 
  • Emerging trends in the eLearning market, including social learning, mobile learning, microlearning, corporate MOOCs, and more. 
  • A look at potential “game-changers” and disruptive technologies and approaches to eLearning, including game-based learning, gamification, and wearable technology. 
  • Insights into the adoption and continued use of eLearning in geographical markets around the world, including considerations of what might be driving change and growth in these markets.
We hope this comprehensive report will be a great source to help learning and development professionals assess the global landscape of eLearning moving into 2018 and beyond.
Download the report.

Enjoy your reading!

Source: Docebo  

Trends in Learning Report 2017 | Business - The Open University

The Open University’s (OU) Institute of Educational Technology (IET) is at the forefront of identifying and developing new ways to enhance learning through technology. 

Download the report
Each year, we analyse the latest innovations in teaching, learning and assessment that are shaping the education landscape. In the annual Trends in Learning report, we explore the implications of those innovations for workplace L&D.

This year’s report focuses on six critical learning trends:
learning through social media, productive failure, design thinking, learning from the crowd, formative analytics and learning for the future. 

Download the report and refresh your learning and development plan using the tips we’ve provided. 

Recommended Reading
 
Photo: The Open University
7 amazing things digital wearable devices are helping us do by Kath Middleditch, who works in the Media Relations team within the Communications Unit at The Open University.  

"For the past 15 years Professor Blaine Price has sported every smartwatch and digital health wearable device imaginable, earning him the nickname of ‘Inspector Gadget’ at home." 

Enjoy the read.

Source: The Open University

Reaching All Learners by Leveraging Universal Design for Learning in Online Courses | EDUCAUSE Review

Key Takeaways
  • An instructional design team at the University of Memphis focused on helping faculty create inclusive online classrooms, become aware of the diversity of their students' learning needs, and adapt their instruction to reach all learners.
  • They did this by helping faculty employ the principles and guidelines of the Universal Design for Learning framework, which consists of three principles: Multiple Means of Engagement, Multiple Means of Representation, and Multiple Means of Action and Expression.
  • After two years, the UDL Implementation Plan, with its emphasis on experimentation, exploration, and inclusive instruction, yielded significant benefits for instructional effectiveness at the University of Memphis.
Check out this interesting article from EDUCAUSE Review, written by Roy Bowery and Leonia Houston, below. 

Photo: The Center for Innovative Teaching and Learning
"The Universal Design for Learning Implementation Plan, with its emphasis on experimentation, exploration, and inclusive instruction, yielded significant benefits for instructional effectiveness at the University of Memphis. Learn how they did it."   

An asynchronous online learning environment invariably includes learners who simply do not connect with the instruction. This method of learning can seem confusing and often isolating for those electing to obtain their degrees completely online. In the past five years, such learners at the University of Memphis have typically been adult students between the ages of 25 and 65 (post-traditional students), working full-time and returning to school to complete their first or second degree. Our institution's success rates for fully online program courses have historically been 8–10 percent lower than face-to-face, traditional format courses. Here we define "success" as the course final grade average exceeding 70 percent. When reviewing the factors affecting learners in this category, we perceived that some challenged learners who experienced difficulty in their online courses were more likely to fall behind without requesting support, while other learners found ways to excel in the same environment. To address the needs of students in the challenged demographic, our unit — the UM3D Instructional Impact team — focused on helping faculty create inclusive online classrooms, become aware of the diversity of their students' learning needs, and adapt their instruction to reach all learners.

In an effort to bridge the success gap, our team focused on helping faculty employ the principles and guidelines of the Universal Design for Learning (UDL) framework. According to the National Center on Universal Design for Learning, the UDL framework consists of three principles: Multiple Means of Engagement, Multiple Means of Representation, and Multiple Means of Action and Expression.1 The principles within the framework focus on the what, how, and why of learning. Each of these key principles helped our faculty address learner variability and include guidelines for encouraging their learners to become more motivated, resourceful, and goal-directed. By incorporating the UDL principles and guidelines into their online program courses, faculty created inclusive learning environments and addressed learner variability. With their newfound skills, most could use the strategies within the framework to design and develop online courses with flexible goals, instructional methods, materials, and assessments.

To assist faculty, we created a UDL Implementation Plan designed to teach them how to gradually incorporate UDL principles into their online classrooms, address learner variability, and create inclusive online instruction. We could customize the framework to meet every course, faculty, or instructional need, and they did not have to follow the principles and guidelines within the framework in a specific order. Instead, faculty could identify instructional methods or assignments affecting success in their course(s) and use specific UDL principles or guidelines to solve their pedagogical issues.
 
UDL Implementation Plan
In the fall semester of 2015, our unit set out to complete a campus-wide implementation of the UDL framework for all online courses and programs. Our institution has approximately 65 fully online programs that include more than 600 fully online asynchronous offerings. To serve the faculty responsible for designing, developing, and delivering our online programs and courses, we focused on determining paths and plans for support that meet instructional needs and move us toward scaling and optimizing a full UDL implementation...

Roy Bowery and Leonia Houston write in the conclusion, "The UDL Implementation Plan process, with its emphasis on experimentation, exploration, and inclusive instruction, yielded significant benefits for instructional effectiveness at the University of Memphis. After two years, we can encourage faculty to resist formulaic instruction and move to make learning accessible to all. Because of this, the university administration has encouraged our team to pursue the integration process with the remainder of our online programs and courses. As we continue the integration of UDL, we will work on the next phase of the process and determine how to scale this implementation to other methods of course delivery."
Read more...  

Source: EDUCAUSE Review

Higher Education, Digital Divides, and a Balkanized Internet | EDUCAUSE Review

New and old digital divides are Balkanizing the Internet, threatening to split apart not only students but also communities. This constitutes one of the most important issues confronting the U.S. higher education technology community.
Photo: Bryan Alexander
"Most of today's educational technology depends on users having Internet access," says Bryan Alexander, futurist, researcher, writer, speaker, consultant, and teacher.


Students, staff, and faculty must be online in order to participate in learning management systems, digital tests, student information systems, licensed databases, and the entire web. They not only must be reliably online but also need to do so through high-speed connectivity. The digitally networked world is increasingly predicated on users having broadband access. 

Unfortunately, Internet access has remained deeply uneven and unequally distributed in the United States [1]. This has serious implications for higher education. Inequitable digital connections can warp access to learning, which in turn can help drive and escalate social inequality. Indeed, the "new" digital divides — which create a Balkanized Internet — may constitute one of the most important issues confronting the U.S. higher education technology community. 

A Short History of Digital Divides 
Uneven Internet access is not a new problem. It has been an issue since the invention of the Internet in the late 1960s. With the inception of the U.S. Defense Department's Advanced Research Projects Agency Network (ARPANET) in 1969, the number of computers, modems, connections, and nodes grew slowly through the 1970s and 1980s. Owning or otherwise having access to a networked computer was by no means ubiquitous. Although the burgeoning networked ecosystem gradually, then more rapidly, increased opportunities for access, those opportunities depended on who had access to the right combination of hardware, networking, and software. As connection speeds began to advance past dialup, they too were unevenly distributed, as per science fiction writer William Gibson's famously cited observation that the future is already here — it's just not evenly distributed yet [2].

By the 1990s, the importance and size of the Internet and its new face, the World Wide Web, became popularly recognized, as did inequalities of access. Accordingly, the United States took steps to identify and mitigate what many were referring to as the digital divide by kicking off a generation of research, activism, policy development, and practice. Under the Clinton administration, federal and state government initiatives joined with nonprofits and businesses to expand Internet access across multiple fronts. The E-Rate program of 1996, for example, compelled telecommunications companies to divert resources in order to link public schools to the burgeoning Internet.

Efforts to address the digital divide continued in the first two decades of the 21st century, with the advent of programs such as One Laptop per Child and state-driven broadband initiatives. Meanwhile, Internet technology continued to change. Mobile phone access came belatedly to the United States after connecting much of the rest of the world, since America had both excellent landline phone service and more Internet-connected computers than most other nations. But once it came, the cell phone revolution offered an alternative to landlines, fiber, and cable boxes. Maximum Internet speeds grew, partly through competition between Internet service providers (ISPs) and also due to research and development, with Internet2 serving as an advanced outlier. Public libraries became community Internet anchors, as librarians not only provided computers, networks, and software but also offered the widest possible range of user training and support. More and more of education, work, and life migrated online, especially once social media took off in popularity and usage. Richer media that required more bandwidth became increasingly popular: animated images, sound files (music and podcasts), streaming video, videoconferencing and webinars, software updates and downloads, and gaming. And yet, broadband remained less than ubiquitous throughout the 21st century. By May 2013, to pick one data point, only 70 percent of households had high-speed broadband [3] — and "high-speed" was defined at a lower speed than what we expect now, in 2017.

The Current Digital Divides 
Where does the Internet access gap stand now, at the end of 2017? We can look back on these historical transformations and see that Internet access inequalities have altered in some ways while persisting in others. The concept continues to deeply determine our Internet experience, dividing it into uneven strata of user access and capacity [4].
Most of the forces that drive uneven Internet access have been at work for decades. To begin with, wealth and education often positively correlate with higher broadband use, as the more affluent and/or educated a family is, the more likely it is to have broadband at home and work. This makes intuitive sense when we think of the costs of laptop and desktop computers and of the greater budgets of schools in wealthier districts. Poorer students have less access to computer science offerings, from classes to afterschool clubs. In addition, higher levels of educational attainment increase one's likelihood of learning digital skills, as well as one's chance of working in a field heavily dependent on the networked world [5].

Wealth can drive familiarity with computation even more strongly than generational differences, as media scholar Siva Vaidhyanathan argued nearly ten years ago. Living in a poor or working-class economic stratum can lead to reduced access in a variety of ways, from inferior equipment to filtering. Poverty can remove urban residents from the relatively plentiful broadband networks that cities host. And ISPs may already be discriminating in speed offerings based on poverty, according to recent complaints to the Federal Communications Commission (FCC) [6].

Racial inequalities also shape access. Blacks, Latinos, and Native Americans continue to lag whites and Asian-Americans in home broadband speeds and access. At least partially in compensation, the former are more likely to use cell phones for connectivity. This may constitute a digital version of the 20th-century real estate practice of redlining: restricting certain populations from access to desired locations. Race is also tied in to the earlier mentioned economic issues, as blacks and Latinos generally have lower incomes and lower savings than do whites [7]. As D. Amari Jackson has observed:
The good news? Your daughter's school has been designated an "Apple Distinguished School" and, as such, she and all of her peers will receive brand new iPads for their individual usage. The bad news? Once your daughter leaves school, she can't use it — at least not at home. For you live in a lower-income neighborhood without access to Internet or a fast-enough connection to take advantage of her shiny new toy [8].
Though wealth is likely a stronger factor, age is another correlate with Internet access: the older an American is, the less likely he/she is to have a speedy connection and the more likely he/she is to use the Internet for less time.  
Read more...

Recommended Reading
 

Provosts, Pedagogy, and Digital Learning by Kenneth C. (Casey) Green, Director of The Campus Computing Project, Director of the ACAO Digital Fellows Program, and the moderator of TO A DEGREE, the postsecondary success podcast of the Bill & Melinda Gates Foundation; Charles Cook, Executive Vice President and CAO at Austin Community College (ACC); Laura Niesen de Abruna, PI on the Bill & Melinda Gates Foundation grant that created the ACAO Digital Fellows Program and Provost and CAO at York College of Pennsylvania; and Patricia L. Rogers, Executive Vice President and CAO at Winona State University.

"Panel members from an EDUCAUSE 2017 Annual Conference session offer insights about the role of provosts and chief academic officers in digital courseware deployment and the challenges of using technology to advance teaching, learning, and student success."

Source: EDUCAUSE Review

If You Can Solve This Math Puzzle, You May Be a Genius | Reader's Digest

Photo: Marissa Laliberte
"It's not as simple as it looks" summarizes Marissa Laliberte, Staff Writer at Reader's Digest.
Photo: Reader's Digest
Do you think of yourself as a secret mathematician? This math brainteaser will have even the biggest number nerds scratching their heads.

People's Daily, China tweeted out this math puzzle in which each picture represents a number.

People's Daily, China @PDChina - 30 Nov 2017
These algebra problems might seem easy at first glance, but hold on. People’s Daily was nice enough to give away the answer before you began. If you didn’t get 16, you did something wrong. Take a closer look at the pictures—you probably missed a few key details. (You’d need to look closely to solve this easy math problem that baffled the Internet, too.)
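The tweet's pictures are not reproduced here, so the values below are purely hypothetical, but the following Python/SymPy sketch shows how this kind of picture puzzle reduces to a small system of linear equations, and how overlooking one visual detail in the final line changes the answer.

```python
# The tweeted pictures aren't reproduced here, so the numbers below are purely
# hypothetical. The point is only to show how such a puzzle reduces to a small
# linear system, and why a "key detail" (say, a picture showing half a banana
# instead of a whole one) changes the final line's coefficients.
from sympy import symbols, solve

apple, banana, cherry = symbols("apple banana cherry")

# Hypothetical first three rows of a picture puzzle:
solution = solve(
    [
        3 * apple - 30,                 # apple + apple + apple = 30
        apple + 2 * banana - 18,        # apple + banana + banana = 18
        2 * banana - cherry - 2,        # banana + banana - cherry = 2
    ],
    [apple, banana, cherry],
)
print(solution)                          # {apple: 10, banana: 4, cherry: 6}

a, b, c = solution[apple], solution[banana], solution[cherry]
# Final line: cherry + apple x banana, but with only HALF a banana pictured.
print("careless reading:", c + a * b)        # 46
print("careful reading: ", c + a * (b / 2))  # 26
```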
Read more...

Source: Reader's Digest

Changes OK’d for math, science | hngnews.com

"The Milton Board of Education approved curriculum changes, focusing largely on math and science programming and sequencing, during a regular meeting held on Nov. 13." inform Kim McDarison, Correspondent, Milton Courier.

The changes, as presented to the board by curriculum committee and school board member Shelly Crull-Hanke, had been previously outlined for committee members by the district’s Director of Curriculum and Instruction Heather Slosarek on Nov. 8.

During the committee meeting, Crull-Hanke said, an extensive PowerPoint presentation had been shared by Slosarek. It included a summary of programming designed to bring district curriculum into better alignment with Common Core State Standards (CCSS) and College and Career Readiness Standards (CCRS), as initially adopted by the Milton School Board during the 2015-2016 school year, and changes made to better align science curriculum with new state requirements as well as Next Generation Science Standards (NGSS), also adopted by the district during the 2015-2016 school year.
During a follow-up interview, Slosarek said: "We are working on alignment to improve growth scores." During a state report card presentation given to the board on Nov. 27, Slosarek noted that curriculum changes would also help with the district’s ongoing initiative to "close gaps," a report card category designed to measure the district’s ability to bring certain state-identified demographically similar groups’ performance scores more in line with those earned by the overall population.
From the report
Slosarek's PowerPoint focused upon three topics, including a 2017-2018 curriculum implementation update, a "science scope and sequence overview," and new course proposals for the high school.
Curriculum update
Under 2017-2018 Curriculum Update, the report discussed the "implementation of past and new curriculum," with implementation defined as teacher support provided by prepackaged course materials publishers and company trained district staff members working to further support district staff, and was broken into three categories: Literacy, Math and Science.
Literacy
Defining literacy as English language arts (ELA), reading and writing, the report outlined current implementation (teacher support) practices and pilot program initiatives across the K-12 continuum.
In elementary and intermediate grades K-6, teachers were being supported through a three-year implementation process while using Jan Richardson's Guided Reading Framework, designed to help teachers "provide powerful small-group literacy instruction," the program's website states.
A one-year implementation process was also underway to support teachers using Lucy Calkins' Unit of Study in Writing, designed to develop skills within the narrative, information and persuasive writing domains... 

Math
Implementation strategies used for math were outlined within three grade level categories, beginning with elementary and intermediate grades K-5. Teachers using Houghton Mifflin Harcourt Math Expressions Curriculum within grades K-5 would be supported for three years, the report stated.
Teachers using the 2018 edition of Math Expressions, focusing on "further differentiation and tech" within the grade category would receive one year’s worth of implementation support.
In the middle and intermediate grades 6-8, teachers using the Big Ideas math curriculum would receive four years of support. The program will include two strands (or curriculum pathways): math, grades 6-8, and enriched math in grades 6-7, followed by eighth-grade algebra.
Science
Schigur defined NGSS as national standards that are supported by the Department of Public Instruction (DPI) and embraced by most area school districts. The district is working to adopt prepackaged materials that are aligned with NGSS, he said.
Within the science curriculum presentation, grades K-3 and 4-5 were broken out into separate blocks. Teachers of K-3 students were in the second year of a two-year implementation process using NGSS-aligned quarterly science units, which further aligned with literacy programming.
No new curriculum would be implemented within grades 4 and 5 as teachers worked to determine an appropriate transition, the report stated.
Read more...
Source: hngnews.com

28 Top Business Books to Get Ahead In 2018 | Entrepreneur

Photo: John Rampton
"Reading is both a leisure activity and a strategically important self-improvement strategy," notes John Rampton, entrepreneur, investor, online marketing guru and startup enthusiast. 

Photo: Westend61 | Getty Images
Reading has been noted as one of the primary habits of ultra-successful people, with magnates like Warren Buffett reading hundreds of pages each day and Bill Gates consuming numerous books each year.

Scientific evidence also points to reading as a way to significantly improve health and wellness factors such as mental acuity, stress levels, sleep quality, empathy, and positivity. Over the past year I've made it a habit of continual reading. Whether it be books on this list that I've come to love and admire or new books of which I've gotten a sneak peek, each one will help you grow.

Pick up some great business books to ensure that 2018 starts correctly and stays successful. Here are 28 business books to add to your tablet or nightstand:

1. "Outside Insight: Navigating a World Drowning in Data," by Jørn Lyseggen.Conceptualized by author Jørn Lyseggen, who also serves as CEO of media intelligence company Meltwater, Outside Insight shifts company leaders’ focuses away from historical internal data and toward external information.

When leaders do this, they will become able to benchmark against the competition; make more strategic, forward-thinking decisions; and stay one step ahead of competitors in 2018.

Related: How Do Your Reading Habits Compare to Elon Musk's, Mark Zuckerberg's and Warren Buffett's?

2. "Hug Your Haters," by Jay Baer. 
"Hug Your Haters" by Jay Baer is the first-ever customer service book for modern times, bringing to life the realities of customer expectations in the social media era.

Baer wants to help readers move on from legacy forms of customer service like telephone and email and move into 2018 armed with methods for delivering on customer expectations — all in the public’s view online. The book also covers how to embrace complaints, turn bad news into great news, and transform haters into ambassadors for a brand.

3. “Superconnector: Stop Networking and Start Building Business Relationships That Matter,” by Scott Gerber and Ryan Paugh. 
"Superconnector" is the next stage in the evolution of networking, thanks to changes shaped by social media and the idea of social capital. The "superconnector" set of habits is where networkers should focus their energies, as superconnecting is about truly understanding the power that comes from building certain relationships, including how putting certain people together amplifies success and leads to innovative solutions...

So much to read, so much time. Yes, this is a lot to read, but there’s time to cover all of these business books and more. That's because you will make time. This is an investment in yourself and your business. As such, you generate a sizable return from the advice and knowledge from these pages and great minds.
Read more... 

Source: Entrepreneur

How to get those top AI jobs | The Hindu - Careers

"There is already a demand for professionals who can wield business tools powered by Artificial Intelligence" says Ketan Kapoor, CEO and Co-founder, Mettl. 

Photo: The Hindu

With the rapid rise in use of innovative digital technologies like Artificial Intelligence (AI) to meet evolving business needs of organisations across industries, more than 30 percent of the global workforce must equip itself with new skills to be effective in their roles.
Despite the pervasive worry about AI making several tech professionals and functions redundant, a new report by Gartner, an American research and advisory firm, states that Artificial Intelligence will create 2.3 million jobs by 2020, while a report by Capgemini revealed that 83% of companies employing AI-powered tools have already been creating new jobs with the help of this technology. Here are some of the top AI-powered jobs that are most in demand at present and will see further growth in the future:

Data scientist
The role of the data scientist in modern businesses is accentuated by the rapid development of cutting-edge analytics tools and technologies. Data scientists, with their advanced skillsets in statistics and analytics, can sort through large volumes of data easily and organise it to extract valuable insights that help businesses make the right decisions and optimise key organisational processes. 
Read more...

Source: The Hindu

Machine Learning: A Guide for Non-Technical Readers | insideBIGDATA - Machine Learning

Photo: insideBIGDATA
"Machine learning has become a water-cooler topic across industries... Download the new report from Dataiku that offers a guide to machine learning basics for non-technical readers" informs Sarah Rubenoff, Contract Writer & Editor. 

Download the Full Report.
Machine learning has become a water-cooler topic across industries. And the chatter about the possibilities of AI and deep learning certainly isn’t slowing down anytime soon.

In an effort to reach non-technical readers, Dataiku has released a new report that offers a guide to machine learning basics. And no, you don’t have to be an AI expert to understand the content—complete with easy-to-understand diagrams and illustrations.

The report, “Machine Learning Basics: An Illustrated Guide for Non-Technical Readers,” starts out by exploring definitions of basic machine learning terms, including the topic itself. What is machine learning? Dataiku says it can be boiled down to one word: Algorithms.

To fully understand machine learning, one must have a basic understanding of data science concepts, as well. Next up, the report offers definitions of 10 fundamental terms for data science and machine learning. Think model, regression, classification and more.

Many businesses today use machine learning through tools such as prediction algorithms, of which the guide explores the most popular: linear models, tree-based models and neural networks...
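As a rough illustration of those three families (a minimal sketch of my own, not code from the Dataiku report), here is how one representative of each might be fit and compared on a toy dataset using scikit-learn:

# Minimal sketch: one linear model, one tree-based model and one neural
# network, each trained and scored on the same toy classification data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "linear model": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "tree-based model": DecisionTreeClassifier(max_depth=4, random_state=0),
    "neural network": make_pipeline(
        StandardScaler(), MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    ),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")

The point of the comparison is not the exact scores but that all three families share the same fit-and-score workflow, which is roughly the level at which the guide pitches them.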

The full report from Dataiku covers the following topics:
  • Machine Learning Concepts for Everyone
  • An Introduction to Key Data Science Concepts
  • Top Prediction Algorithms
  • How to Evaluate Models
  • Introducing the K-Fold Strategy and the Hold-Out Strategy (see the sketch after this list)
  • K-Means Clustering Algorithms in Action
  • For Further Exploration
  • About Dataiku
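For readers curious about the two evaluation strategies listed above, the following sketch (again my own illustration under assumed details, not Dataiku's code) contrasts the hold-out and k-fold approaches with scikit-learn:

# Hold-out vs. k-fold evaluation of the same model on a toy dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Hold-out: train on one split, score once on the untouched test portion.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
hold_out_score = model.fit(X_train, y_train).score(X_test, y_test)

# K-fold: rotate the held-out portion k times and average the scores.
k_fold_scores = cross_val_score(model, X, y, cv=5)

print(f"hold-out accuracy: {hold_out_score:.3f}")
print(f"5-fold mean accuracy: {k_fold_scores.mean():.3f}")

The k-fold average is generally a more stable estimate than a single hold-out score, at the cost of training the model k times.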
Read more... 

Recommended Reading

Photo: insideBIGDATA
10 Tips for Building Effective Machine Learning Models
"In this contributed article, Wayne Thompson, Chief Data Scientist at SAS, provides 10 tips for organizations who want to use machine learning more effectively."

Source: insideBIGDATA    

Artificial intelligence disrupts assisted living facilities | Digital Journal - Health

Photo: Tim Sandle
"The care home and assisted living concepts are altering through the use of artificial intelligence; this is leading to a safer environment for elderly people and others who require assisted support" reports Dr. Tim Sandle, Digital Journal's Editor-at-Large for science news.

Families who cannot devote the required attention a loved one deserves may decide that an assisted living facility is the best way to provide supervision, medical care, and social interactions for the beloved individual.
Photo: Carol Forsloff
Instead of a healthcare worker making twice-daily or four-times-daily visits, the emerging picture is a room equipped with sensors and cameras, elderly people wearing monitoring wristbands, and the deployment of biomimetic companion robots, each sending vital signs to a data hub that is interpreted by artificial intelligence.

One example of an artificial intelligence-driven assisted living platform is Caremerge, a digital healthcare startup. The company offers a care coordination platform designed for senior living communities. At the same time, appropriate software can also determine whether an ‘alert’ is genuine or false. 

Remote reporting
Caremerge, according to Forbes, provides services like motion detectors that can alert staff if an elderly person were to fall over. A further function of the service is helping keep families up to date on the services provided by the care home through remote reporting. 

Voice activated apps
Another service is voice-activated devices and smartphone apps, which enable care staff to receive guidance from health professionals. Such devices can also help automate staff wages and expenses by performing functions like automatically logging mileage.
Read more...

Source: Digital Journal

The four industries making best use of artificial intelligence | The Australian Financial Review - Leadership

Photo: Nick Deeks
"Professional services businesses need to follow the lead of other industries if they want to head-off disruption" says Nick Deeks, managing director of property, project and cost management consultants WT Partnership.


Facial recognition is an example of artificial intelligence at work.
Photo: Shutterstock.com
Was this the legal sector's "Kodak moment"? The event that signalled the beginning of the end: "The people of Darwin can just about take the law into their own hands, with a new legal firm going lawyer-free," ABC News reported recently. "With a few clicks of a button, a client can enter their details and will then be asked a few simple questions by Ailira, before the robot generates a fully certified will, using the Ailira system."

Right now, Australia has only a handful of businesses that have successfully integrated artificial intelligence into their day-to-day operations. But each month we are witnessing advancements and seeing early adopters reap the benefits.

Artificial intelligence in professional services may seem a vision reserved for futurists, and it's a change most people in the industry are in denial about. However, if you are not introducing AI already, you should be planning to incorporate it in your workplace in the next six to 12 months, or face the same fate as Kodak (the camera and photographic film maker, founded in 1888, which filed for bankruptcy in 2012 after failing to address the disruptive effects of digital photography).

AI has been slow to disrupt professional services, in contrast to the speed at which blue-collar industries are being disrupted by robotics: think of manufacturing, the car industry and bricklaying (watch out for Fastbrick Robotics). 

CEOs fearful
Australian professional services businesses need to get on the front foot when it comes to introducing and embedding AI. The service sector represents 70 per cent of Australia's gross domestic product. Professional services, one of our top five service exports, brought in $5.2 billion in 2015-16.

It is understandable that many CEOs are fearful or have yet to be convinced about the benefits of AI. It is also true that AIs will need to have greater capacity to handle complexity than they do currently, to convince Australian companies of the business case.
Read more... 

Source: The Australian Financial Review
