Conway’s Law

Conway’s law is an adage named after computer programmer Melvin Conway, who introduced the idea in 1967;[1] it was first dubbed Conway’s law by participants at the 1968 National Symposium on Modular Programming.[2] It states that

organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations

— M. Conway[3]

The law is based on the reasoning that in order for a software module to function, multiple authors must communicate frequently with each other. Therefore, the software interface structure of a system will reflect the social boundaries of the organization(s) that produced it, across which communication is more difficult. Conway’s law was intended as a valid sociological observation, although sometimes it’s taken in a humorous context.

Aha! Study examines people as they are struck by sudden insight

Everybody loves those rare “aha moments” where you suddenly and unexpectedly solve a difficult problem or understand something that had previously perplexed you.

But until now, researchers had not had a good way to study how people actually experienced what is called “epiphany learning.”

In new research, scientists at The Ohio State University used eye-tracking and pupil dilation technology to see what happens as people figured out how to win a strategy game on a computer.

“We could see our study participants figuring out the solution through their eye movements as they considered their options,” said Ian Krajbich, co-author of the study and assistant professor of psychology and economics at Ohio State.

“We could predict they were about to have an epiphany before they even knew it was coming.”

Krajbich conducted the study with James Wei Chen, a doctoral student in economics at Ohio State. Their results were published this week in the Proceedings of the National Academy of Sciences.

Most decision-making research has focused on reinforcement learning, where people gradually adjust their behavior in response to what they learn, Chen said.

“Our work is novel in that we’re looking at this other kind of learning that really has been neglected in past research,” he said.

For the study, 59 students played a game on a computer against an unseen opponent. On the screen were 11 numbers (0 to 10) arranged in a circle (like a rotary phone, for those old enough to remember). The students chose one number and then their opponent chose a number. The details of how they won are somewhat complex (it had to be complex for them to have something to figure out), but essentially the optimal game strategy boils down to picking the lower number. Therefore, picking zero was always the best choice.
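The study's exact payoff rule is omitted above, but the takeaway — that always playing zero is optimal — can be checked in a toy version of the game. Everything below (the simplified "lower number wins, ties split" rule, the uniformly random opponents) is an assumption for illustration, not the study's actual design:

```python
import random

def play(a, b):
    """Toy rule (an assumption -- the study's real rule is more complex):
    the lower number wins; ties count as half a win."""
    if a == b:
        return 0.5
    return 1.0 if a < b else 0.0

def win_rate(choice, opponents, trials=10_000, seed=0):
    """Estimate the win rate of a fixed choice against random opponents."""
    rng = random.Random(seed)
    wins = sum(play(choice, rng.choice(opponents)) for _ in range(trials))
    return wins / trials

# Against opponents picking 0-10 uniformly at random,
# zero is the best fixed choice.
opponents = list(range(11))
rates = {n: win_rate(n, opponents) for n in range(11)}
best = max(rates, key=rates.get)
```

Under this toy rule, lower choices strictly dominate higher ones, so the simulation's best fixed choice comes out as zero.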

The participants played 30 times in a row, always against a new opponent. The researchers created an incentive to win by awarding small payments for each victory.

An eye-tracker sitting under the computer screen could tell what numbers they were looking at as they considered their options during parts of the experiment.

After each of the trials, participants had the option of committing to playing one number for the rest of the trials. They were encouraged to do so by the promise of an extra payment. Participants were then reminded what number they chose, what number their opponent had chosen, and whether they had won or lost.

The goal for the researchers was to see when players had that epiphany, that “aha moment,” in which they realized that zero was always the best choice and then committed to playing that number for the rest of the experiment.

The results showed that about 42 percent of players had an epiphany at some point and committed to playing zero. Another 37 percent committed to a number other than zero, suggesting they didn’t learn the right lesson. The remaining 20 percent never committed to a number.

The researchers could tell when a player had an epiphany.

“There’s a sudden change in their behavior. They are choosing other numbers and then all of a sudden they switch to choosing only zero,” Krajbich said. “That’s a hallmark of epiphany learning.”

These participants gave clues that they were about to have that aha moment, even if they didn’t realize it. The eye-tracker showed they looked at zero and other low numbers more often than others did in the trials just before their epiphany, even if they ended up choosing other numbers.

“We don’t see the epiphany in their choice of numbers, but we see it in their eyes,” Chen said. “Their attention is drawn to zero and they start testing it more and more.”

Those who had the epiphanies also spent less time looking at their opponents’ number choices and more time considering the result of each trial – whether they won or lost. The researchers said this suggests they were learning that their choice of a low number was the key to victory.

A key to epiphany learning is that it comes suddenly, which was evident when the researchers looked at eye-tracking results on the commitment screen. This was the screen where participants could choose to commit to zero (or another number) for the rest of the trials.

“Those who showed epiphany learning weren’t building up confidence over time. There was no increase in the amount of time they looked at the ‘commit’ button as they went through the trials, which would have indicated they were considering committing,” Krajbich said.

“They weren’t paying a lot of attention to the commit button until the moment they decided to commit,” Chen added.

Findings on pupil dilation provided additional evidence that epiphany learners were reacting differently than others.

“When your pupil dilates, we see that as evidence that you’re paying close attention and learning,” Krajbich said. Results showed that those who experienced epiphany learning displayed significant pupil dilation while viewing the feedback screen (telling them whether they won or lost) before they made the commit decision. The dilation disappeared after they committed.

“They were showing signs of learning before they made the commitment to zero,” Krajbich said. “We didn’t see the same results for others.”

These results suggest that you have to look within to truly experience epiphany learning.

“One thing we can take away from this research is that it is better to think about a problem than to simply follow others,” Krajbich said.

“Those who paid more attention to their opponents tended to learn the wrong lesson.”

Original article here.



In a world filled with ever-more-complex technological, sociological, ecological, political & economic systems… a tool to make interactive simulations may not be that much help. But it can certainly try.

Loopy is a tool by Nicky Case for building and visualizing system simulations.

Play with simulations here.


How Does Bitcoin Work? (video)

We have all heard of Bitcoin.  This video gives a more technical explanation of how Bitcoin works. Want more? Check out my new in-depth course on the latest in Bitcoin, Blockchain, and a survey of the most exciting projects coming out (Ethereum, etc):…
Lots of demos: how to buy, send, and store bitcoin (hardware and paper wallets), how to use JavaScript to send bitcoin, how to create an Ethereum smart contract, and much more.
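As a taste of what the video covers, here is a minimal sketch of Bitcoin's proof-of-work idea: miners search for a nonce that makes the block's hash meet a difficulty target. The `mine` function and the "leading hex zeros" difficulty scheme below are simplifications for illustration; real Bitcoin double-SHA-256 hashes an 80-byte block header and compares it against a far harder numeric target.

```python
import hashlib

def mine(block_data: str, difficulty: int):
    """Toy proof-of-work: find a nonce such that
    SHA-256(block_data + nonce) starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Finding the nonce takes work; checking it takes one hash.
nonce, digest = mine("block with some transactions", difficulty=4)
```

That asymmetry — expensive to find, trivial for anyone to verify — is what lets the network agree on the chain without trusting any single party.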

Shorter 5 min introduction:…

Written version:…

My Bitcoin address: 13v8NB9ScRa21JDi86GmnZ5d8Z4CjhZMEd

Arabic translation by Ahmad Alloush

Spanish caption translation by Borja Rodrigo, DFJWgXdBCoQqo4noF4fyVhVp8R6V62XdJx

Russian caption translation by Alexandra Miklyukova

Original article here.


Pioneering chip extends sensors’ battery life (video)

A low-cost chip that enables batteries in sensors to last longer, in some cases by over ten times, has been developed by engineers from the University of Bristol.

Dr Bernard Stark and colleagues in the Bristol Electrical Energy Management Research Group, based in the Merchant Venturers School of Engineering, have developed a voltage detector chip that requires only a few trillionths of a watt (picowatts) to activate other circuits. The research group is providing samples of the chip to companies, which will enable engineers to design sensors that continuously listen without drawing power from a battery or the mains.

The result is smaller batteries, or a battery life that is extended, in some cases by years. The voltage detector can also eliminate standby power entirely: for example, the team has demonstrated a TV with no continuous draw of power during standby, using a voltage detector that is powered up at a distance, directly from the infrared signal of a standard TV remote control.

The patent-pending UB20M voltage detector, or keep-alive device, is a chip that, when combined with a suitable sensor, eliminates standby power by enabling zero-power sensing and listening. It allows circuit designers to develop circuits that perform continuous monitoring without using battery power, and to implement wireless wake-up with zero receiver power. The chip is a sensor-driven circuit that requires no power supply of its own; instead it uses a fraction of the power contained in the output signal of the sensor.

Here is an overview video:


Original article here.


The Map of Mathematics (video)

The entire field of mathematics summarised in a single map! This shows how pure mathematics and applied mathematics relate to each other and all of the sub-topics they are made from.

If you would like to buy a poster of this map, they are available here:…

I have also made a version available for educational use which you can find here:…

To err is to human, and I human a lot. I always try my best to be as correct as possible, but unfortunately I make mistakes. This is the errata where I correct my silly mistakes. My goal is to one day do a video with no errors!

1. The number one is not a prime number. The definition of a prime number is a whole number GREATER than 1 that can be divided evenly only by 1 and itself. (The “greater than 1” bit is the bit I forgot.)
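The corrected definition translates directly into code — a minimal primality check that gets the "greater than 1" condition right:

```python
def is_prime(n: int) -> bool:
    """A prime is a whole number GREATER than 1
    divisible evenly only by 1 and itself."""
    if n <= 1:  # the condition the video forgot: 1 is not prime
        return False
    # Trial division up to the square root is enough.
    return all(n % d for d in range(2, int(n ** 0.5) + 1))
```

With the `n <= 1` guard in place, `is_prime(1)` is correctly false while 2, 3, 5, 7, … still pass.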

2. In the trigonometry section I drew cos(theta) = opposite / adjacent. This is the kind of thing you learn in high school and guess what. I got it wrong! Dummy. It should be cos(theta) = adjacent / hypotenuse.
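The correction is easy to verify numerically with a 3-4-5 right triangle; opposite/adjacent is actually the tangent, not the cosine:

```python
import math

# 3-4-5 right triangle: opposite = 3, adjacent = 4, hypotenuse = 5
opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0
theta = math.atan2(opposite, adjacent)

cos_correct = adjacent / hypotenuse  # 0.8  -- matches math.cos(theta)
cos_wrong = opposite / adjacent      # 0.75 -- the slip in the map; that's tan(theta)
```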

3. My drawing of dice is slightly wrong. Most dice have their opposite sides adding up to 7, so when I drew 3 and 4 next to each other that is incorrect.

Thanks so much to my supporters on Patreon. I hope to make money from my videos one day, but I’m not there yet! If you enjoy my videos and would like to help me make more this is the best way and I appreciate it very much.

Here are links to some of the sources I used in this video.

Summary of mathematics:…
Earliest human counting:…
First use of zero:…
First use of negative numbers:…
Renaissance science:…
History of complex numbers:…
Proof that pi is irrational:…

Also, if you enjoyed this video, you will probably like my science books, available in all good book shops around the world and printed in 16 languages. Links are below, or just search for Professor Astro Cat. They are fun children’s books aimed at the age range 7-12. But they are also a hit with adults who want good explanations of science. The books have won awards and the app won a Webby.

Frontiers of Space:…
Atomic Adventure:…
Intergalactic Activity Book:…
Solar System App:…

Find me on twitter, instagram, and my website:…

Link to original YouTube video.

Everything you need to know about beer, in one chart (infographic)

There are dozens upon dozens of different styles of beer out there, from pale ales to stouts to bocks — and those are just a few.

With so many styles, and so many exceptions to the rules, it’s incredibly difficult (not to mention time-consuming) to get to know them all, but knowing your favorites will make drinking them a lot more enjoyable.

We’ve created a taxonomy of most major beer styles to help you put your favorite cold ones into context.

Original article here.

GE wants you to do some science with Labracadabra

If you’re looking for something to occupy your teen’s time, you could do worse than GE’s Labracadabra: a family of mind-expanding science kits that start at the allowance-friendly price of $29.99.

At launch, there are six on offer, and they all demonstrate an important scientific concept, like exothermic reactions and sound waves. And yes. There’s also one that sees you create a ‘volcano’ using household items.

Guiding you through each experiment is a special Labracadabra Alexa skill for the Amazon Echo. You can think of this as a science teacher in your front-room.

You can order a Labracadabra kit starting today. It’s clearly a far better option than watching endless repeats of Bad Santa.

Original article here.

Traces of the Sun (video)

Explanation: This year the December Solstice is today, December 21, at 10:44 UT, the first day of winter in the north and summer in the south. To celebrate, watch this amazing timelapse video tracing the Sun’s apparent movement over an entire year from Hungary. During the year, a fixed video camera captured an image every minute. In total, 116,000 exposures follow the Sun’s position across the field of view, starting from the 2015 June 21 solstice through the 2016 June 20 solstice. The intervening 2015 December 22 solstice is at the bottom of the frame. The timelapse sequences constructed show the Sun’s movement over one day to begin with, followed by traces of the Sun’s position during the days of one year, solstice to solstice. Gaps in the daily curves are due to cloud cover. The video ends with stunning animation sequences of analemmas, those figure-8 curves you get by photographing the Sun at the same time each day throughout a year, stepping across planet Earth’s sky.
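The figure-8 shape of an analemma can be sketched from two standard textbook approximations: the equation of time (how far sundial noon drifts from clock noon) and the solar declination (how high the Sun sits in the sky). The formulas below are common approximations, good to roughly a minute and a fraction of a degree; plotting one against the other over a year traces the figure 8.

```python
import math

def sun_position(day_of_year: int):
    """Approximate equation of time (minutes) and solar declination
    (degrees) for a given day of the year; textbook approximations."""
    b = 2 * math.pi * (day_of_year - 81) / 365
    eot = 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)
    decl = 23.45 * math.sin(b)
    return eot, decl

# Plotting (eot, decl) for every day of the year traces the analemma.
points = [sun_position(d) for d in range(1, 366)]
```

The east-west width of the figure 8 comes from the equation of time (about -14 to +16 minutes), and its height from the declination swinging between roughly ±23.45 degrees between the solstices.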

Original article here.


Google’s AI created its own universal ‘language’

The technology used in Google Translate can identify hidden material between languages to create what’s known as interlingua.

Google has previously taught its artificial intelligence to play games, and it’s even capable of creating its own encryption. Now, its language translation tool has used machine learning to create a ‘language’ all of its own.

In September, the search giant turned on its Google Neural Machine Translation (GNMT) system to help it automatically improve how it translates languages. The machine learning system analyses and makes sense of languages by looking at entire sentences – rather than individual phrases or words.

Following several months of testing, the researchers behind the AI have seen it blindly translate between languages even when it has never studied one of the languages involved in the translation. “An example of this would be translations between Korean and Japanese where Korean⇄Japanese examples were not shown to the system,” Mike Schuster of Google Brain wrote in a blog post.

The team said the system was able to make “reasonable” translations of language pairs it had not been taught to translate. In one instance, a research paper published alongside the blog post says, the AI was taught Portuguese→English and English→Spanish. It was then able to translate directly from Portuguese→Spanish.
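The paper enables this with a deliberately simple input convention: a single shared model handles every language pair, and an artificial token prepended to the source sentence tells it which target language to produce. A minimal sketch of that preprocessing step (the `<2xx>` token format follows the paper; the helper function itself is illustrative):

```python
def to_multilingual_input(target_lang: str, sentence: str) -> str:
    """Prepend the artificial target-language token used by multilingual
    GNMT. Because one shared model sees every pair, zero-shot combinations
    (e.g. Portuguese -> Spanish) become possible."""
    return f"<2{target_lang}> {sentence}"

# Trained on pt->en and en->es, the model can still be *asked* for pt->es:
example = to_multilingual_input("es", "Olá, mundo")
```

No per-pair model or pivot translation through English is needed; the target token is the only thing that changes between requests.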

“To our knowledge, this is the first demonstration of true multilingual zero-shot translation,” the paper explains. To make the system more accurate, the computer scientists then added additional data to the system about the languages.

However, the most remarkable feat of the research paper isn’t that an AI can learn to translate languages without being shown examples of them first; it’s that it used this skill to create its own ‘language’. “Visual interpretation of the results shows that these models learn a form of interlingua representation for the multilingual model between all involved language pairs,” the researchers wrote in the paper.

An interlingua is an artificial intermediate language. In this case, the interlingua was an internal representation of meaning shared across languages, which is what allowed the AI to translate pairs it had never seen.

“Using a 3-dimensional representation of internal network data, we were able to take a peek into the system as it translated a set of sentences between all possible pairs of the Japanese, Korean, and English languages,” the team’s blog post continued. The data within the network allowed the team to interpret that the neural network was “encoding something” about the semantics of a sentence rather than comparing phrase-to-phrase translations.

“We interpret this as a sign of existence of an interlingua in the network,” the team said. As a result of the work, the Multilingual Google Neural Machine Translation is now being used across all of Google Translate and the firm said multilingual systems are involved in the translation of 10 of the 16 newest language pairs.

The research from the Google Brain team follows its recent work that taught AI to create a form of encryption. In a research paper published online, the scientists created three neural networks: Alice, Bob, and Eve. Each network was given its own job: one to encrypt messages, one to receive and decrypt them, and the third to try to decrypt the messages without having the encryption keys.

After training, the AIs were able to convert plain-text messages into encrypted messages using their own form of encryption, and then decode them again.

Original article here.
