Feed aggregator

Artificial Intelligence Still Isn't a Game Changer | Bloomberg - Tech

Photo: Leonid Bershidsky
"Machines can beat humans at some things, but they remain one-trick ponies," insists Leonid Bershidsky, Bloomberg View columnist.

Man vs. machine.
Photo: Oleksandr Rupeta/NurPhoto via Getty Images
Not much time passes these days between so-called major advancements in artificial intelligence. Yet researchers are not much closer than they were decades ago to the big goal: actually replicating human intelligence. That’s the most surprising revelation by a team of eminent scholars who just released the first in what is meant to be a series of annual reports on the state of AI.

The report is a great opportunity to finally recognize that the methods we now know as AI and deep learning do not qualify as "intelligent." They are based on the "brute force" of computers and limited by the quantity and quality of available training data. Many experts agree.

The steering committee of "AI Index, November 2017" includes Stanford's Yoav Shoham and the Massachusetts Institute of Technology's Erik Brynjolfsson, an eloquent writer who did much to promote the modern-day orthodoxy that machines will soon displace people in many professions. The team behind the effort tracked the activity around AI in recent years and found thousands of published papers (18,664 in 2016), hundreds of venture capital-backed companies (743 as of July 2017) and tens of thousands of job postings. It's a vibrant academic field and an equally dynamic market (the number of U.S. start-ups in it has increased by a factor of 14 since 2000).

All this concentrated effort cannot help but produce results. According to the AI Index, the best systems surpassed human performance in image detection in 2014 and are on their way to 100 percent results. Error rates in labeling images ("this is a dog with a tennis ball") have fallen to less than 2.5 percent from 28.5 percent in 2010. Machines have matched humans at recognizing speech in a telephone conversation and are getting close at parsing the structure of sentences, finding answers to questions within a document and translating news stories from German into English. They have also learned to beat humans at poker and Pac-Man. But, the authors of the index wrote:

Tasks for AI systems are often framed in narrow contexts for the sake of making progress on a specific problem or application. While machines may exhibit stellar performance on a certain task, performance may degrade dramatically if the task is modified even slightly. For example, a human who can read Chinese characters would likely understand Chinese speech, know something about Chinese culture and even make good recommendations at Chinese restaurants. In contrast, very different AI systems would be needed for each of these tasks.

The AI systems are such one-trick ponies because they're designed to be trained on specific, diverse, huge datasets. It could be argued that they still exist within philosopher John Searle's "Chinese Room." In that thought experiment, Searle, who doesn't speak Chinese, is alone in a room with a set of instructions, in English, on correlating sets of Chinese characters with other sets of Chinese characters. Chinese speakers slide notes in Chinese under the door, and Searle pushes his own notes back, following the instructions. The Chinese speakers can be fooled into thinking his replies are intelligent, but that's not really the case. Searle devised the "Chinese Room" argument -- to which there have been dozens of replies and attempted rebuttals -- in 1980. But modern AI still works in a way that fits his description.

Machine translation is one example. Google Translate, which has drastically improved since it started using neural networks, trains the networks on billions of lines of parallel text in different languages, translated by humans...

...up to us to keep this branch of computer science in its place by only giving it as much data as we're comfortable handing over -- and only using it for those applications in which it can't produce dangerously wrong results if fed lots of garbage.  

Source: Bloomberg 

Want to learn more about earthquakes? | Redlands Daily Facts

Check out these new books at the A.K. Smiley Public Library, recommended by Anna Pearson, a staff member at the A.K. Smiley Public Library in Redlands.

For Californians, it is always in the back of our minds that an earthquake might happen today, so our interest in the subject is high. The A.K. Smiley Public Library has three new books that will help us to understand this amazing and dangerous phenomenon. 

‘The Great Quake’ 

The Great Quake:
How the Biggest Earthquake in North America Changed Our Understanding of the Planet

Henry Fountain has written an engrossing narrative of the 1964 Alaska earthquake in "The Great Quake: How the Biggest Earthquake in North America Changed Our Understanding of the Planet." He follows geologist George Plafker as he immediately surveys the destruction of the town of Valdez and the incredible damage to the landscape, harbors, roads and buildings throughout south-central Alaska.

The quake registered magnitude 9.2 and shook violently for five minutes, but most of the 139 people who died were killed by the resulting tsunamis, including 16 deaths in California and Oregon. Fountain tells the stories of people as they tried to outrun the waves or escape over ground that was rising and falling and opening in deep cracks...


‘Quakeland’

Quakeland:
On the Road to America's Next Devastating Earthquake

Kathryn Miles expands the study of earthquakes in “Quakeland: On the Road to America’s Next Devastating Earthquake.” Our planet experiences about 1,000 earthquakes every day, but we still do not understand when and where the next one will be.
We are not so surprised at a quake in California, which has a long and recent history of events, but in 2011, a 5.8 quake, the largest east of the Mississippi since 1897, rocked Virginia.
Intrigued that the country has 2,100 mapped faults across every state, Miles was curious to see if anyone was paying attention to the threat of earthquakes. She visited dams, nuclear plants, mines and the tunnels under Manhattan and talked with their managers and engineers to see how aware they were of the danger...

‘Earthquake Prediction’

Earthquake Prediction:
Dawn of the New Seismology

“Earthquake Prediction: Dawn of the New Seismology” is precisely what David Nabhan’s new book is about. Nabhan’s interest in the subject was inspired by his own experience of earthquakes in California beginning in 1987.
He noticed that they happened near the same times of day. He realized that the eight greatest earthquakes in California in the past 61 years had all occurred around dawn or dusk.

Source: Redlands Daily Facts and Amazon

Review Of Two Books: 'Dawn Of The New Everything' And 'Hacking Of The American Mind' | Seeking Alpha

  • Silicon Valley scientist Jaron Lanier turns philosophical guru in "Dawn of the New Everything."
  • Pediatrician and former public health official Robert Lustig describes human addiction to electronic devices, advertising, drugs, sugar and processed foods.
  • These two books take deep dives into the social costs of Silicon Valley and the marketing of the "consumer industrial complex."

"Check out these two books: 'Dawn Of The New Everything' and 'Hacking Of The American Mind' below" continues Seeking Alpha.

Photo: Hazel Henderson
"These two books are engrossing, enlightening and call for a re-design of these large sections of the US economy," says Hazel Henderson, D.Sc.Hon., FRSA, founder of Ethical Markets Media, LLC and producer of its TV series.

In "Dawn of the New Everything," computer scientist and musician Jaron Lanier reveals, through a biographical account of his innovative startup VPL Research, his deep philosophical approach to the digital revolution. Lanier goes beyond his "Who Owns The Future" (2014) and its insightful understanding of the shortcomings of Silicon Valley's social media business models. He skewers the contradictions inherent in Facebook (FB), Google (GOOGL), Twitter (TWTR) and similar business models based on advertising and selling their users' personal information. Lanier proposes that every single bit and pixel uploaded by users be paid for, which is feasible with existing software.
Dawn of the New Everything:
Encounters with Reality and Virtual Reality

In "Dawn of the New Everything" he amplifies these critiques of digital-age companies in terms similar to the 2017 Congressional hearings...

In "The Hacking of the American Mind," Dr. Robert Lustig, professor of pediatrics at the University of California and former US public health official, goes even deeper than Lanier. Lustig describes in anatomical detail how addictive technologies, from phones, tablets and TV to marketing, branding and advertising, affect the human brain. He describes the brain's neural pathways and their responses via hormonal changes in dopamine, serotonin, cortisol, testosterone and oxytocin. He describes how addictive substances, including those in foods, can cause dependency, and how they can change and distort human behavior in response.

The Hacking of the American Mind:
The Science Behind the Corporate
Takeover of Our Bodies and Brains
Dr. Lustig's first foray into these critiques of mass industrial marketing is his best seller "Fat Chance" (2013), which examines sugar, salt and fats as addictive substances and identifies sugar as the worst for human health. In "The Hacking of the American Mind," Lustig extends this challenge to the entire industrial food sector and how it has depleted the nutritional value of most packaged, canned and processed foods.
Read more...
Source: Seeking Alpha and Amazon

New Book Provides Concrete Strategies to Help Schools Accelerate Teacher Growth with Video Coaching | Markets Insider

"A new book aims to help educators successfully make and implement a plan for using video-based learning as part of the classroom observation and professional development process," informs Markets Insider.

Evidence of Practice:
Playbook for Video-Powered Professional Learning
"Evidence of Practice: Playbook for Video-Powered Professional Learning" offers 12 strategies for video coaching – including Video Learning Communities (VLCs), Virtual Walk-through, and Online Lesson Study – that readers can implement in their own districts, schools, or classrooms.

"This book fills a sorely needed gap in professional practice literature when it comes to best practices for video coaching, and it answers the important questions any educator might have when using video to improve teaching in the classroom," said Cary Goldweber, executive producer of digital products at ASCD. "It is a nice balance between theory, practice, and application."

"Evidence of Practice" draws from researcher and practitioner advice to provide a practical and implementation-focused guide immediately useful to educators. Authored by Edthena founder Adam Geller along with Annie Lewis O'Donnell, the book also includes an afterword by professional development author and instructional coaching expert Jim Knight.

"Teachers, coaches, and administrators often believe video observation and video coaching can be a high-impact tool for accelerating teacher growth, but the gap between a good idea and execution can feel large," said Geller. "The new playbook bridges this gap by providing educators with practical, concrete steps to implement and use video coaching with fidelity."

The book also covers the research basis for putting video evidence at the center of professional learning, techniques for analyzing video of classrooms, and tactical guidance about recording and sharing teaching videos.
Read more... 

About the Authors
Adam Geller is the founder of Edthena. He started his career in education as a science teacher in St. Louis, Missouri. Since 2011, Adam has overseen the evolution of Edthena from a paper-based prototype into a research-informed and patented platform used by schools, districts, teacher training programs, and professional development providers. Adam has written on education technology topics for various publications including Education Week, Forbes, and edSurge, and he has been an invited speaker about education technology and teacher training for conferences at home and abroad.

Annie Lewis O'Donnell is an independent educational consultant. She works with intentionally diverse school communities that strive to serve all children equitably and with organizations that train and develop teachers. O'Donnell began her career in education as a second-grade teacher at a public school in Baltimore, Maryland. For more than twelve years, she led national design teams at Teach For America, overseeing pre-service teacher preparation and ongoing in-service support.

Jim Knight is a research associate at University of Kansas Center for Research on Learning, senior partner of the Instructional Coaching Group, and president of Impact Research Lab. He has spent more than a decade studying instructional coaching and has written several books on the topic.

Source: Markets Insider

Five ways to fix statistics | Nature

"As debate rumbles on about how and how much poor statistics is to blame for poor reproducibility, Nature asked influential statisticians to recommend one change to improve science. The common theme? The problem is not our maths, but ourselves," according to Nature.

Photo: David Parkins
To use statistics well, researchers must study how scientists analyse and interpret data and then apply that information to prevent cognitive mistakes. 

In the past couple of decades, many fields have shifted from data sets with a dozen measurements to data sets with millions. Methods that were developed for a world with sparse and hard-to-collect information have been jury-rigged to handle bigger, more-diverse and more-complex data sets. No wonder the literature is now full of papers that use outdated statistics, misapply statistical tests and misinterpret results. The application of P values to determine whether an analysis is interesting is just one of the most visible of many shortcomings. 

It’s not enough to blame a surfeit of data and a lack of training in analysis. It’s also impractical to say that statistical metrics such as P values should not be used to make decisions. Sometimes a decision (editorial or funding, say) must be made, and clear guidelines are useful.

The root problem is that we know very little about how people analyse and process information. An illustrative exception is graphs. Experiments show that people struggle to compare angles in pie charts yet breeze through comparative lengths and heights in bar charts. The move from pies to bars has brought better understanding.

We need to appreciate that data analysis is not purely computational and algorithmic — it is a human behaviour. In this case, the behaviour is made worse by training that was developed for a data-poor era. This framing will enable us to address practical problems. For instance, how do we reduce the number of choices an analyst has to make without missing key features in a data set? How do we help researchers to explore data without introducing bias? 

The first step is to observe: what do people do now, and how do they report it? My colleagues and I are doing this and taking the next step: running controlled experiments on how people handle specific analytical challenges in our massive online open courses.

We need more observational studies and randomized trials — more epidemiology on how people collect, manipulate, analyse, communicate and consume data. We can then use this evidence to improve training programmes for researchers and the public. As cheap, abundant and noisy data inundate analyses, this is our only hope for robust information. 


What Do the AI Chips in New Smartphones Actually Do? | Gizmodo - Field Guide

"Artificial intelligence is coming to your phone" notes David Nield, Contributor.

Photo: Huawei
The iPhone X has a Neural Engine as part of its A11 Bionic chip; the Huawei Kirin 970 chip has what’s called a Neural Processing Unit or NPU on it; and the Pixel 2 has a secret AI-powered imaging chip that just got activated. So what exactly are these next-gen chips designed to do?

As mobile chipsets have grown smaller and more sophisticated, they’ve started to take on more jobs and more different kinds of jobs. Case in point, integrated graphics—GPUs now sit alongside CPUs at the heart of high-end smartphones, handling all the heavy lifting for the visuals so the main processor can take a breather or get busy with something else.

The new breed of AI chips is very similar—only this time the designated task is recognizing pictures of your pets rather than rendering photo-realistic FPS backgrounds.

What we talk about when we talk about AI 
AI, or artificial intelligence, means just that. The scope of the term tends to shift and evolve over time, but broadly speaking it’s anything where a machine can show human-style thought and reasoning.

A person hidden behind a screen operating levers on a mechanical robot is artificial intelligence in the broadest sense—of course today’s AI is way beyond that, but having a programmer code responses into a computer system is just a more advanced version of getting the same end result (a robot that acts like a human).

As for computer science and the smartphones in your pocket, here AI tends to be more narrowly defined. In particular it usually involves machine learning, the ability for a system to learn outside of its original programming, and deep learning, which is a type of machine learning that tries to mimic the human brain with many layers of computation. Those layers are called neural networks, based on the neural networks inside our heads.

So machine learning might be able to spot a spam message in your inbox based on spam it’s seen before, even if the characteristics of the incoming email weren’t originally coded into the filter—it’s learned what spam email is.
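The idea can be sketched in a few lines. This is a deliberately minimal, illustrative word-counting classifier, not any real mail provider's filter: it "learns" what spam looks like purely from labeled examples, so it can flag a message whose exact wording was never coded into it.

```python
# A minimal sketch of a learned spam filter: count which words appear in
# known spam versus known legitimate mail, then score new messages by
# those counts. All messages and words here are made-up examples.
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam) pairs -> per-class word counts."""
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in messages:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words, ham_words

def is_spam(text, spam_words, ham_words):
    """Label a new message by which class its words were seen in more often."""
    words = text.lower().split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    return spam_score > ham_score

# Train on a few labeled examples, then classify an unseen message
# that merely shares vocabulary with the spam ones.
training = [
    ("win a free prize now", True),
    ("claim your free money", True),
    ("meeting agenda for monday", False),
    ("lunch on tuesday?", False),
]
spam_words, ham_words = train(training)
print(is_spam("free prize money", spam_words, ham_words))  # True
```

The message "free prize money" was never in the training data, yet it gets flagged because its words were seen mostly in spam; that is the sense in which the filter has "learned" rather than been explicitly programmed.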

Deep learning is very similar, just more advanced and nuanced, and better at certain tasks, especially in computer vision—the “deep” bit means a whole lot more data, more layers, and smarter weighting. The most well-known example is being able to recognize what a dog looks like from a million pictures of dogs.

Plain old machine learning could do the same image recognition task, but it would take longer, need more manual coding, and not be as accurate, especially as the variety of images increased. With the help of today’s superpowered hardware, deep learning (a particular approach to machine learning, remember) is much better at the job.
Read more... 

Source: Gizmodo  

Differences Between AI, Machine Learning and Deep Learning | Times Square Chronicles

Check out this article by Times Square Chronicles about the Differences Between AI, Machine Learning and Deep Learning.

Photo: Times Square Chronicles
Thanks to the advent of some pretty amazing technology, our devices are starting to get a lot smarter. Depending on where you live, you may have seen self-driving vehicles making test runs around your town, and if you have used an online help feature when placing an order, you may have interacted with a chatbot. Smartphones are getting wiser and a robot has been programmed to solve a Rubik’s cube. Sophisticated platforms like Qualcomm’s Artificial Intelligence platform — on top of giving users improved connectivity, reliability and security — enable this increase in intelligence, which can be labelled as machine learning, smart learning or artificial intelligence.

To get a better sense of what these terms mean and how they are connected and different, check out the following:

Artificial Intelligence
The best way to think of these three terms is to think of concentric circles with artificial intelligence — the concept that came first — as the largest circle, with machine learning, which came next, in the middle circle and then deep learning in the center...

Machine Learning
Machine learning takes the concept of AI and expands on it a bit more. While AI relies on computer programming, machine learning uses complex algorithms to analyze a huge amount of data, glean patterns and then make a prediction — all without having a person program the device ahead of time...

Deep Learning
Just as machine learning is a subset of AI, deep learning is a subset of machine learning. Deep learning is a specific class of machine learning algorithms that use complex neural networks to take the idea of computer intelligence to a whole new level...

Source: Times Square Chronicles (press release)

Teaching machines to teach themselves | The Conversation - Machine learning

Photo: Arend Hintze
"For future machines to be as smart as we are, they'll need to be able to learn like we do," insists Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University.
How can computers learn to teach themselves new skills?
Photo: baza178/
Are you tired of telling machines what to do and what not to do? It’s a large part of regular people’s days – operating dishwashers, smartphones and cars. It’s an even bigger part of life for researchers like me, working on artificial intelligence and machine learning.

Much of this is even more boring than driving or talking to a virtual assistant. The most common way of teaching computers new skills – such as telling apart photos of dogs from ones of cats – involves a lot of human interaction or preparation. For instance, if a computer looks at a picture of a cat and labels it “dog,” we have to tell it that’s wrong.

But when that gets too cumbersome and tiring, it’s time to build computers that can teach themselves, and retain what they learn. My research team and I have taken a first step toward the sort of learning that people imagine the robots of the future will be capable of – learning by observation and experience, rather than needing to be directly told every little step of what to do. We expect future machines to be as smart as we are, so they’ll need to be able to learn like we do.

Setting robots free to learn on their own 
In the most basic methods of training computers, the machine can use only the information it has been specifically taught by engineers and programmers. For instance, when researchers want a machine to be able to classify images into different categories, such as telling apart cats and dogs, we first need some reference pictures of other cats and dogs to start with. We show these pictures to the machine, and when it guesses right we give positive feedback, and when it guesses wrong we apply negative feedback.

This method, called reinforcement learning, uses external feedback to teach the system to change its internal workings in order to guess better next time. This self-change involves identifying the factors that made the biggest differences in the algorithm’s decision, reinforcing accuracy and discouraging wrong decisions.
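The feedback loop described above can be sketched with the simplest possible learner. This perceptron-style update is an illustrative assumption, not the specific system from the research: a linear classifier nudges its internal weights whenever the external feedback says its guess was wrong.

```python
# Learning from external feedback: guess, receive right/wrong feedback,
# and adjust internal weights toward the correct answer.
def train_with_feedback(examples, n_features, epochs=20, lr=0.1):
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:  # label: +1 ("dog") or -1 ("cat")
            guess = 1 if sum(w * f for w, f in zip(weights, features)) + bias > 0 else -1
            if guess != label:  # negative feedback: nudge weights toward the truth
                weights = [w + lr * label * f for w, f in zip(weights, features)]
                bias += lr * label
    return weights, bias

# Made-up features (say, ear pointiness and snout length); +1 = dog, -1 = cat.
examples = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], -1), ([0.2, 0.1], -1)]
weights, bias = train_with_feedback(examples, n_features=2)
guess = 1 if sum(w * f for w, f in zip(weights, [0.85, 0.9])) + bias > 0 else -1
print(guess)  # 1, i.e. "dog"
```

The key point matches the article: the programmer never writes a rule like "pointy ears mean dog"; the rule emerges from repeated feedback on guesses.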

Another layer of advancement sets up another computer system to be the supervisor, rather than a human. This lets researchers create several dog-cat classifier machines, each with different attributes – perhaps some look more closely at color, while others look more closely at ear or nose shape – and evaluate how well they work. Each time each machine runs, it looks at a picture, makes a decision about what it sees and checks with the automated supervisor to get feedback.

Alternatively or in addition, we researchers turn off the classifier machines that don’t do as well, and introduce new changes to the ones that have done well so far. We repeat this many times, introducing small mutations into successive generations of classifier machines, slowly improving their abilities.

This is a digital form of Darwinian evolution – and it’s why this type of training is called a “genetic algorithm.” But even that requires a lot of human effort – and telling cats and dogs apart is an extremely simple task for a person.
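A genetic algorithm of the kind described above can be sketched in a few lines: keep the candidates that score best, discard the rest, and refill the population with mutated copies of the survivors. The fitness function here (distance to a target weight vector) is an illustrative assumption standing in for "how well the classifier works."

```python
# Minimal genetic algorithm: select the fittest half of a population,
# then repopulate with randomly mutated copies of the survivors.
import random

def fitness(candidate, target):
    """Higher is better: negative squared distance from the ideal weights."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def evolve(target, pop_size=20, generations=50, mutation=0.1, seed=0):
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in target] for _ in range(pop_size)]
    for _ in range(generations):
        # "Turn off" the classifiers that don't do as well...
        population.sort(key=lambda c: fitness(c, target), reverse=True)
        survivors = population[: pop_size // 2]
        # ...and introduce small mutations into copies of the ones that did.
        children = [[g + rng.gauss(0, mutation) for g in parent] for parent in survivors]
        population = survivors + children
    return population[0]

best = evolve(target=[0.5, -0.3, 0.8])
print(best)  # a candidate close to [0.5, -0.3, 0.8]
```

Over successive generations the best candidate drifts toward the target, which is the "slowly improving their abilities" the article describes.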

Learning like people 
Our research is working toward a shift from a present in which machines learn simple tasks with human supervision, to a future in which they learn complicated processes on their own. This mirrors the development of human intelligence: As babies we were equipped with pain receptors that warned us about physical damage, and we had an instinct to cry when hungry or otherwise in need. 
Read more... 

Source: The Conversation

Artificial intelligence isn’t as clever as we think, but that doesn’t stop it being a threat | The Verge - Artificial Intelligence

Photo: James Vincent
"A new report tries to bring order to the messy business of measuring AI progress," says James Vincent, who covers machines with brains for The Verge, despite being a human without one.

Photo: Bryan Bedder / Getty Images for National Geographic
How clever is artificial intelligence, really? And how fast is it progressing? These are questions that keep politicians, economists, and AI researchers up at night. And answering them is crucial — not just to improve public understanding, but to help societies and governments figure out how to react to this technology in coming years.
A new report from experts at MIT, Stanford University, OpenAI, and other institutions seeks to bring some clarity to the debate — clarity, and a ton of graphs. The AI Index, as it’s called, was published this week, and begins by telling readers we’re essentially “flying blind” in our estimations of AI’s capacity. It goes on to make two main points: first, that the field of AI is more active than ever before, with minds and money pouring in at an incredible rate; and second, that although AI has overtaken humanity when it comes to performing a few very specific tasks, it’s still extremely limited in terms of general intelligence.
As Raymond Perrault, a researcher at SRI International who helped compile the report, told The New York Times: “The public thinks we know how to do far more than we do now.”

To come to these conclusions, the AI Index looked at a number of measures of progress, including “volume of activity” and “technical performance.” The former stat examines how much is happening in the field, from conference attendance to class enrollment to VC investment and startups founded. The short answer here is that everything’s happening a lot. In graph terms, it’s all “up and to the right.”
The other factor, “technical performance,” attempts to measure AI’s capabilities to outcompete humans at specific tasks, like recognizing objects in images and decoding speech. Here, the picture is more nuanced.
There are definitely tasks where AI has already matched or eclipsed human performance. These include identifying common objects in images (on a test database, ImageNet, humans get a 5 percent error rate; machines, 3 percent), and transcribing speech (as of 2017, a number of AI systems can transcribe audio with the same word error rate as a human). A number of games have also been definitively conquered, including Jeopardy, Atari titles like Pac-Man, and, most famously, Go. 
But as the report says, these metrics give us only a partial view of machine intelligence. For example, the clear-cut world of video games is not only easier to train AI in, because well-defined scoring systems help scientists assess and compare different approaches; it also limits what we can ask of these agents. In the games AI has “solved,” the computer can always see everything that’s happening — a quality known to scientists as “perfect information.” The same can’t be said of other tasks we might set AI on, like managing a city’s transport infrastructure. (Although researchers have begun to tackle video games that reflect these challenges, like Dota.)
Caveats of a similar nature are needed for tasks like audio transcription. AI may be just as accurate as humans when it comes to writing down recorded dialogue, but it can’t gauge sarcasm, identify jokes, or account for a million other pieces of cultural context that are crucial to understanding even the most casual conversation. The AI Index acknowledges this, and adds that a bigger problem here is that we don’t even have a good way to measure this sort of commonsense understanding. There’s no IQ test for computers, despite what some PR people claim. Read more...
Source: The Verge

51 Artificial Intelligence (AI) Predictions For 2018 | Forbes - Technology

Photo: Gil Press
"51 predictions about AI becoming more practical and useful in 2018, automating some jobs and augmenting many others, combining machine learning and big data for fresh insights, with chatbots proliferating in the enterprise," reports Gil Press, who writes about technology, entrepreneurs and innovation.

Photo: Shutterstock
It is somewhat safe to predict that AI will continue to be at the top of the hype cycle in 2018. But the following 51 predictions also envision it becoming more practical and useful, automating some jobs and augmenting many others, combining machine learning and big data for fresh insights, with chatbots proliferating in the enterprise.

As the automotive industry undergoes massive disruption, incumbent OEMs and Tier 1s are becoming increasingly aware that they need to adopt AI immediately to address not only the external vehicle environment but to understand the in-cabin experience as well. Semi-autonomous and fully autonomous vehicles will require an AI-based computer vision solution to ensure safe driving, seamless handoffs to a human driver, and an enriched travel experience based on the emotional, cognitive and wellness states of the occupants—Dr. Rana el Kaliouby, CEO and co-founder, Affectiva

In 2018, I expect we'll see a number of firsts, including AI systems which can explain themselves directly ('first person') instead of being externally assessed ('third person'); the erosion of net neutrality due to increasingly personalized and optimized AI-driven content delivery; and the burst of the deep learning bubble. AI startups who have simply applied AI in a particular domain will no longer receive over-inflated valuations. Those that survive will be offering a fundamental and demonstrable step forward in AI capability. We'll also see at least one more fatal accident involving autonomous vehicles on the roads, and a realization that human-level autonomous driving will require much longer to test and mature than current optimistic predictions—Monty Barlow, director of machine learning, Cambridge Consultants

AI will begin answering the question “Why?” Two things we’ve learned watching early adopters interact with AI systems over the last couple of years are: 1) Humans are not good at not knowing what an AI is doing, and 2) AI is not good at telling humans what it’s doing. This leaves users frustrated wondering “Why?” in the face of AI’s only current explanation: “Because I said so.” In 2018, it will no longer be enough for AI creators to shrug off users’ desire for more transparency by blaming the lack of communication on the fact that the machine is processing thousands of variables per second. In order to gain users’ trust that an AI system is working in pursuit of a shared goal, AI developers will begin prioritizing advanced forms of accountability, reporting and system queries that allow users to ask, “Why?”, in response to very specific actions—Or Shani, CEO, Albert
Read more... 

Source: Forbes 

Science and history meet art in trio of exhibits | Orlando Sentinel - Entertainment

Photo: Matthew J. Palm
"The Cornell Fine Arts Museum in Winter Park has time on its mind. Space, too," summarizes Matthew J. Palm, columnist and writer.

Tomas Saraceno, an Argentine artist based in Berlin, created "Cloud Cities -- Nebulous Thresholds" to hang under the glass dome of the conservatory at the Alfond Inn in Winter Park.
Photo: Matthew J. Palm/staff
Its latest exhibition, “Time as Landscape: Inquiries of Art and Science,” has proven too vast for the Cornell’s building at Rollins College. So the museum has curated an additional science-meets-art exhibit at the Orlando Science Center in Loch Haven Park.

The two disciplines have more in common than you might think. 

“Artists and scientists use the same skills,” said Jeff Stanford, vice president of marketing for the Orlando Science Center, in the park north of downtown. “They both ask big questions, theorize and experiment to get to their final results.” The venue is exhibiting “Steady Observation,” one of the first shows at its up-and-coming gallery.

Representatives of both institutions hope the companion exhibits illustrate how art is a critical component of education, an argument made by proponents of STEAM — the philosophy that stresses the use of science, technology, engineering, the arts and math to guide student learning and critical thinking.

“The artists included in the exhibition desire to understand, question and describe the subject of time,” many with a scientific viewpoint, wrote Abigail Ross Goodman in the exhibition catalog. She and Amy Galpin of the Cornell curated the show.

With the Cornell’s space taken over by science-based works, its permanent collection was available to travel.

“We wanted some of the historic collection to be on view in the community when the whole museum here is dedicated to contemporary art,” said Cornell director Ena Heller.

More than 40 works landed across the street from the Science Center — at the Mennello Museum of American Art.

Galpin curated “Time and Thought” for the Mennello, using the Cornell collection to highlight important ideas and events in American history.

Source: Orlando Sentinel

Boost Productivity in the Workplace via E-Learning | - Community

Photo: Christian Williams. Christian Williams, a sales, marketing and business developer at one of the leading firms in the field in London, reports, "A managerial position in an organization is a corporate level many people aspire to reach. However, this position of leadership comes with many responsibilities."

Photo: Pexels
In many cases, managers are tasked with leading a team of employees within the organization. Teams bring together employees with professional skills, and their goal is to combine those skills towards achieving the organization's goals. The effectiveness of these teams depends on the ability of the manager to help them become more productive by working smarter with the right tools. In the following part of this article, I will discuss five ways managers can help employees become more productive.

Set Clear Goals and Objectives
Setting the right goals helps employees focus their abilities on productive activities aimed at achieving those goals. With the right goals in place, it is easier for managers to encourage collaboration among and between teams in the organization. The manager's aim is to establish good communication and to promote knowledge sharing among team members who are working together towards common goals.

Propagate the Importance of Achieving Goals
When your employees fully understand the importance of the company's goals and the expected benefits that come with achieving them, they will be motivated to pursue them. For example, the US real estate firm Hudson & Marshall has created a detailed plan that guides employees through the different phases of a project and helps them become more productive. You should also never overlook the importance of providing high-quality e-learning tools for your employees. Working with smart tools will help them perform their duties better.

Proper Delegation of Duties
The essence of building a team is to encourage the selected professionals to combine their proficiencies towards achieving the organization's goals. This is accomplished by delegating duties to employees who acknowledge the importance of playing their part. Delegating responsibilities improves employees' mindset: it signals that you have confidence in their ability to deliver on their assigned tasks. It also shares the workload, reducing stress and frustration in the workplace.

Source: (blog) 

Innovative School: Project-based learning at the American College of Sofia | The Sofia Globe

"No more dead parrot-fashion learning. No more rote learning by passive pupils. Let us enter the age of Project-Based Learning, and see why the American College of Sofia officially has been given the title of an 'Innovative School'," according to The Sofia Globe staff.
Photo: The Sofia Globe staff

The American College of Sofia follows project-based learning, an effective, dynamic and enjoyable way for pupils to acquire and retain the knowledge they need.
It’s an approach based on the idea that students acquire a deeper knowledge through active exploration of real-world challenges and problems. Let’s translate that into English, or shall we say, science.
Today we are visiting three classes at the American College of Sofia’s science department, the grade 11 and 12 chemistry and physics profiles.
In the well-equipped physics lab, science department head Krasimira Chakarova and her colleague Vanya Angelova are interacting with pupils who are working in separate teams to go through a stage of experiments they are conducting.
The pupils were given their tasks the previous week, and before beginning the practical stage, had to come up with three theoretical sources on which to base their experiments.
Here, a laser is being used in an experiment involving condensation; there, a team studies meniscus optics; and at another table, a team is seeking to establish why clothes go dark when they're wet.
That’s the thing about project-based learning, PBL to initiates. Pupils learn to use the scientific method to answer a research question and, in the process, also improve their communication skills by writing a scientific article, giving an oral presentation and engaging in debate to defend their findings.
In the case of these science experiments, they have learnt to use the devices, carry out observations, and must keep a journal to record all of these observations, which will be submitted to the teacher for discussion and feedback.
This process, of which today’s session is part, began around the start of the second week of November and will continue until the third week of January, with a final research paper submitted and then, in the presentation, engagement with opponents and reviewers.
This would be a good place, by the way, to mention that they rather seem to be enjoying themselves.
Read more...
Source: The Sofia Globe

Robot learning improves student engagement | EurekAlert - Education

"The first-ever study of Michigan State University's pioneering robot-learning course shows that online students who use the innovative robots feel more engaged and connected to the instructor and students in the classroom."

Robot-learning could be the wave of the future for online classes.
Photo: Michigan State University
Stationed around the class, each robot has a mounted video screen controlled by the remote user that lets the student pan around the room to see and talk with the instructor and fellow students participating in-person.

The study, published in Online Learning, found that robot learning generally benefits remote students more than traditional videoconferencing, in which multiple students are displayed on a single screen.
Christine Greenhow, MSU associate professor of educational psychology and educational technology, said that instead of looking at a screen full of faces as she does with traditional videoconferencing, she can look a robot-learner in the eye - at least digitally. 

"...students participating with the robots felt much more engaged and interactive with the instructor and their classmates who were on campus."  
Read more... 

Original Source
Source: EurekAlert (press release) 

This site lets you take Harvard’s most popular computer science class and more courses from top universities for free | Business Insider - Insider Picks

The Insider Picks team writes about stuff we think you'll like. Business Insider has affiliate partnerships, so we get a share of the revenue from your purchase.
Photo: Connie Chen
Connie Chen, reporter on the Insider Picks team notes, "Online learning has made education more accessible than ever, but few platforms make getting a quality education easier or more affordable than the non-profit edX."

One of Harvard's most popular courses, Introduction to Computer Science, is available on edX for free.
edX is a massive open online course (MOOC) provider founded by MIT and Harvard in 2012. By partnering with more than 90 of the world's leading universities, non-profits, NGOs, and corporations, it's able to offer free, high-quality courses across a large range of subjects.
edX's mission is to:
  • Increase access to high-quality education for everyone, everywhere
  • Enhance teaching and learning on campus and online
  • Advance teaching and learning through research
While you can learn anything from programming in Java to the science of happiness for free, you can also pay a fee to receive a certificate of completion for each course. Other special programs include the Professional Certificate and MicroMasters Certificate, which are designed to provide specialized training and career advancement opportunities.

If you're interested in learning more about edX and how it works, keep reading. 

Ivy Leagues, top international universities, small liberal arts colleges, and music and performing arts schools are all represented in edX's impressive roster of institutions.  

Photo: Jannis Tobias Werner/Shutterstock
Here are just a handful of names you'll recognize: 

  • MIT
  • Harvard
  • Caltech 
  • Cornell
  • Wellesley
  • Juilliard
  • Imperial College London
  • The Hong Kong University of Science and Technology
  • University of Oxford
  • Tsinghua University
  • University of Edinburgh 
See a full list of schools and organizations here. You can click on each school to learn more about it and see which edX courses it offers. 

Source: Business Insider

Employers Want Machine Learning & Big Data Analytics Skills—This Online MBA Has It Covered | BusinessBecause - MBA Careers

Photo: Robert Klecha
"The UK’s Nottingham Trent University has incorporated cutting-edge classes on machine learning into its Online MBA with Data Analytics," informs Robert Klecha, reporter and screenwriter at BusinessBecause.

Photo: BusinessBecause
Big data is perhaps the buzzword of 21st-century tech. It is predicted to become the basis of business competition in the future as machines gain access to ever more data and algorithms replace human decision-making.

At the same time, the US alone faces a shortage of up to 190,000 people with expert data analytics skills, as well as 1.5 million data-savvy managers and analysts, according to a McKinsey Global Institute (MGI) report. The demand for professionals with big data analytics skills is clear.

With this in mind, the UK’s Nottingham Trent University has incorporated a Practical Machine Learning Methods for Data Mining course into its Online MBA with Data Analytics, to offer MBA students real insight into the world of big data.

Digitalization across industries encourages huge data capture, and understanding how to analyze this information is vital to success, but various issues are still holding back progress. The US healthcare industry, for example, was predicted in 2011 to be able to create $300 billion in value every year if the sector could use big data creatively. In 2016, however, MGI found that only 10 to 20 percent of those opportunities had been realized. A lack of talent, process and organizational change, incentives, and regulations was blamed for the slow development.

This is precisely the kind of area that NTU is focusing on. As part of their machine learning specialized module, students are taught how to interpret diverse diagnostic information stored by hospital systems and biometric data to help inform patient care and drug development. 

Courses like Nottingham Trent University’s Online MBA with Data Analytics have never been so relevant. And with the demand for these skills outstripping supply, they’re only going to increase in value.
Read more... 

Recommended Reading

Luca (left) graduated with an MBA from Copenhagen Business School in 2017.
Copenhagen MBA Starts New Career At Denmark’s Leading Renewable Energy Company by Thomas Nugent, journalist at BusinessBecause.

"Luca Piccardi came across Ørsted—formerly DONG Energy—after a company visit during his MBA at Copenhagen Business School. Now, he works there."

Source: BusinessBecause

The technology trends you can expect in education in 2018 | - Opinion

"The increasing influence of technology in education is offering us a glimpse into a gradually evolving realm of unconstrained learning."

Technology’s transformative nature is like that of an excited particle that alters the status quo of any physical space it enters. A little glimpse into the dynamic digital world is indicative of how technology has given a whole new meaning to education. It has changed everything in our lives – what we thought we knew, what we were accustomed to – and has helped us explore new dimensions to achieve our objectives more effectively and efficiently. This is also true in the field of education, where it has twisted the very fabric of traditional learning and extended to us new and more evolved learning methodologies.

The rapid increase in internet connectivity has been an important catalyst for the growth of e-learning. A proliferation of edtech platforms in the Indian landscape is slowly eliminating barriers to accessing quality education. With 400 million K-12 students, upcoming startups can expect to effectively mobilise their content by creating micro-learning facilities for self-learning while simultaneously engendering employment opportunities. According to a recent report by Google and KPMG, online education in India will see approximately 8x growth in the next five years. This will have a significant impact on the edtech market, which has the potential to reach $1.96 billion by 2021.

Education with the help of technology has crossed borders and has opened up a world of opportunities for students. From easy sharing of information to collaboration with the help of email and cloud applications to instant access to learning programmes anytime, anywhere, here is how technology will alter the education sector in 2018:

Virtual reality and gamification
Do you know that passive teaching methods lead to a concept retention rate of less than 30 percent? On the other hand, participatory techniques generate retention rates of up to 90 percent. Where our traditional education system fails is that we remain largely focussed on outworn practices that keep student engagement passive, and retention at the bare minimum.

The aforementioned issue is being actively addressed and tackled by technology. Augmented reality, virtual reality, and gamification are giving students an immersive, first-hand experience through graphical simulation, and, thereby, extending the concept of experiential learning. This has the effect of boosting both engagement and retention, while the use of animation ensures that students understand complicated theories easily. Such technologies are more likely to be a game-changer in time, with visible developments taking place from 2018...

Machine learning and artificial intelligence 
People might ask how artificial intelligence actually affects a student’s learning capabilities and boosts human intelligence. Well, AI is turning pedagogical training right on its head. AI-driven algorithms create behavioural models by studying individual data sets. Based on these models, the algorithms develop a deeper understanding of a student’s strengths and weaknesses and devise a unique personalised learning curve...
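As a rough illustration of the kind of behavioural modelling described above (a hypothetical sketch, not any specific product's algorithm), an adaptive tutoring system might track a student's per-topic accuracy and steer practice toward the weakest area:

```python
from collections import defaultdict


class AdaptiveTutor:
    """Toy behavioural model: tracks per-topic accuracy and
    recommends the topic where the student is weakest."""

    def __init__(self):
        self.attempts = defaultdict(int)
        self.correct = defaultdict(int)

    def record(self, topic, was_correct):
        """Log one answered question for a topic."""
        self.attempts[topic] += 1
        if was_correct:
            self.correct[topic] += 1

    def accuracy(self, topic):
        """Fraction of correct answers seen so far for a topic."""
        if self.attempts[topic] == 0:
            return 0.0  # treat unseen topics as weakest
        return self.correct[topic] / self.attempts[topic]

    def next_topic(self):
        """Recommend the topic with the lowest observed accuracy."""
        return min(self.attempts, key=self.accuracy)


tutor = AdaptiveTutor()
for topic, ok in [("fractions", True), ("fractions", True),
                  ("algebra", False), ("algebra", True),
                  ("geometry", False)]:
    tutor.record(topic, ok)

print(tutor.next_topic())  # geometry (0/1 correct, the weakest topic)
```

A production system would replace these raw counts with a learned model over far richer data, but the personalisation loop (observe, model, recommend) is the same.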

... if we are able to deliver despite an outmoded education system, imagine what wonders the next generation will accomplish, once it has been trained with advanced pedagogical methods. 


Elementary School Students from Champions Learn to Code | PR Newswire

Elementary school students at Champions, KinderCare Education's before- and after-school program, will learn how to write computer programming code next week, December 4-10, as part of the national program Hour of Code.

The Hour of Code is a national effort led by Code.org to demystify computer science. By showing that anyone can learn basic computer coding, organizers hope to encourage more participation in computer science. The Hour of Code takes place each year during Computer Science Education Week. More than 100 industry partners, including Google, MSN, Yahoo!, Disney, and Apple, are joining this year's Hour of Code.

Champions students in participating programs will use the Hour of Code to help break down stereotypes about computer science and to introduce children to a universal language that is the new basic literacy of our digital age. Coding facilitates efficient problem-solving skills and encourages children to think creatively and innovatively. (And it's fun: watch Champions students program a robot in this video.)

"How we converse and get our work done in the future will be different than it is today," said Gretchen Yeager, Champions Director of Quality and Accreditation. "It is critical that we prepare future generations to think about our world in a new way." 

About Champions® Before- and After-School Programs

At more than 450 sites in 19 states and Washington, D.C., Champions provides families a safe, convenient place for children to learn and have fun before and after school. As a leading provider of out-of-school-time learning and education programs, Champions offers parents peace of mind and administrators a dedicated partner in delivering high-quality education.
For more information, visit

About KinderCare Education®

KinderCare Education is an experience-based provider of early education and child care with more than 30,000 teachers and staff serving 170,000 families every day, where they need us:
  • In neighborhoods with our KinderCare® Learning Centers that offer early childhood education and child care for children six weeks to 12 years old
  • At work through KinderCare Education at Work™, family-focused benefits for employers including on-site and near-site early learning centers and back-up care for last-minute child care
  • In local schools with our Champions® before- and after-school programs.
KinderCare Education operates 1,400 early learning centers, more than 470 Champions sites, and is supported by a corporate team of nearly 500 headquarters employees based in Portland, Oregon. In 2017, KinderCare Education received a Gallup Great Workplace Award – one of only 37 companies worldwide to do so. 
To learn more, visit  

Source: PR Newswire (press release)

Criminals look to machine-learning to mount cyber attacks | SC Magazine UK

"Artificial intelligence will increasingly be used by hackers to create new forms of attack" says Rene Millman, SC Magazine UK.

Cyber-criminals will use artificial intelligence and machine learning to outwit IT security and mount new forms of cyber-attacks, according to predictions made by McAfee.

Speaking at the launch of the IT security company's threat predictions report at its MPower conference in Amsterdam, McAfee chief scientist Raj Samani said in an interview that criminals will increasingly use machine learning to create attacks, experiment with combinations of machine learning and artificial intelligence (AI), and expand their efforts to discover and disrupt the machine learning models used by defenders.

He said that machine learning will “help criminals to speak in a native language when carrying out a phishing attack”. This would improve their social engineering—making phishing attacks more difficult to recognise.

In response, those charged with defending IT infrastructure will need to combine machine learning, AI, and game theory to probe for vulnerabilities in both software and the systems they protect, to plug holes before criminals can exploit them.
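To make the defensive side concrete (a toy sketch under stated assumptions, not McAfee's approach), a rudimentary phishing filter might score a message against weighted suspicious features; in a real system these weights would be learned from labelled mail rather than set by hand:

```python
# Hand-weighted features stand in for a trained model; the phrases and
# weights below are illustrative assumptions, not a vetted rule set.
SUSPICIOUS_FEATURES = {
    "urgent": 0.4,        # pressure language
    "verify your": 0.5,   # credential-harvesting phrasing
    "http://": 0.3,       # unencrypted link
    "password": 0.4,
}


def phishing_score(message):
    """Sum the weights of every suspicious feature found in the message."""
    text = message.lower()
    return sum(w for feat, w in SUSPICIOUS_FEATURES.items() if feat in text)


def is_phishing(message, threshold=0.7):
    """Flag a message when its accumulated score crosses the threshold."""
    return phishing_score(message) >= threshold


print(is_phishing("URGENT: verify your password at http://example.test"))  # True
print(is_phishing("Lunch at noon tomorrow?"))  # False
```

This also illustrates the arms race in the article: once attackers use machine learning to generate fluent, native-sounding phishing text, crude phrase-level features like these lose their signal, which is why defenders are expected to need learned models and adversarial testing of their own.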

Samani also predicted that ransomware will evolve from its main purpose of extortion to something different.

“The growth of ransomware has been much discussed. But in reality, it has blended and morphed into something else. Threat vectors can be a smoke screen. Ransomware [in some attacks] was used to distract the IT department. What we see is a growth of pseudo-ransomware.”

He added that whatever the attack may be, “we'll always be able to tell the motivation, but not immediately”. Such distraction attacks will be carried out in much the same way as DDoS attacks have been used to obscure the real aspects of other attacks. These could be “spectacular” proof-of-concept attacks aimed at engaging large organisations with mega-extortion demands in the future.
Read more... 

Source: SC Magazine UK

The (virtual) reality of training | Offshore Technology

Lloyd’s Register has developed a Virtual Reality (VR) Safety Simulator to help support training and knowledge transfer in the energy industry and to illustrate the need for a continued focus on safety and risk assessment. Patrick Kingsland spoke to LR’s VP of Marketing and Communications Peter Richards and Global Academy Training Manager Luis De La Fuente about how it works.

With VR we can bring some of that hands-on practical experience into the classroom.
Photo: Offshore Technology
Patrick Kingsland: What are the key challenges the offshore industry faces today in terms of safety training? 

Luis De La Fuente: The oil and gas industry is very cyclical. During downturns, skilled people leave and companies can be left with a knowledge gap and only a handful of experienced people. Challenging times often result in less training as training budgets are reduced or transferred elsewhere. During boom periods you often get a sudden influx of new talent who are often less experienced, and some of the more skilled personnel who were let go during the downturn leave the industry and decide not to come back. This means you lose quite a bit of hands-on knowledge and experience. For us, the whole goal is to work out how to reduce that learning gap and get a person to a competent, senior level in as short a time as possible.

PK: How could virtual reality help reduce that learning gap in your opinion? 

LF: Well one good way is to introduce as much theory as possible so that when staff get onto the job sites they are familiar with what they are doing. But you still have a gap there between theory and practice. With VR we can bring some of that hands-on practical experience into the classroom without necessarily having to put guys out there where it’s more dangerous.

Peter Richards: It’s about making the training environment more immersive so that you can really start to understand the potential hazards you are going to be exposed to. VR also provokes a reaction from individuals about the implications their actions will have in an offshore environment. And because it’s interactive, people are coming away from our VR experience and actually talking about safety training in a positive way. Bear in mind this is not a subject that generally generates that level of enthusiasm.

PK: How has VR technology improved over the past few years and where did you look for inspiration? 

PR: I think we’re now at a stage where VR is at a tipping point. The technology has got a lot better and the cost has come down. It’s more viable to look at it as a realistic mainstream application as opposed to where I think we were four years ago when everybody was talking about virtual reality but it was really just an overgrown 3D video.

During this gestation period we were closely following the technology to see how it was developing. In conjunction with a number of agencies we looked at gaming technology in particular. The hardware we are now using deploys handsets as well as head-sets. This gives us the ability to interact more with the environment as opposed to the initial development of VR which was just a headset.
Read more... 

Source: Offshore Technology

