"Design Is [Smart]": https://design.google/library/design-is-smart/ Talk by Jess Holbrook and Josh Lovejoy demystifying and explaining human-centered machine learning. #ML
Line-us: The little #Robot drawing arm: https://www.line-us.com/
Jeff Bezos taking his new dog for a walk at the MARS2018 conference. #Robot
"Fractal AI: A Fragile Theory of Intelligence": #ML
https://github.com/FragileTheory/FractalAI/blob/master/README.md
OMG #Emotion Challenge:
https://www2.informatik.uni-hamburg.de/wtm/OMG-EmotionChallenge/
The OMG-Emotion Behavior Dataset:
https://arxiv.org/pdf/1803.05434.pdf
"Amazon deforestation is close to tipping point": https://www.sciencedaily.com/releases/2018/03/180319124212.htm
"Deforestation of the Amazon is about to reach a threshold beyond which the region's rainforest may undergo irreversible changes that transform the landscape into degraded savanna with sparse shrubby plant cover and low biodiversity. Deforestation rates ranging from 20% to 25% could turn the hydrological cycle unable to support its ecosystem."
"Meat Makers: the artificial beef revolution" - 20min documentary by The Economist:
Presentation by John Frazer on "Computational Design":
About the talk: Frazer describes the monumental problems we face globally that architecture could address, the power computational design possesses, and the tragedy that the latter is not employed to address the former. The computational compression of space and time, virtual prototyping, and direct control over robotic fabrication all have the potential to address massive global issues. Frazer describes the robust capabilities of cellular automata, genetic algorithms, and evolutionary algorithms.
About the speaker: John Frazer is the godfather of algorithmic and evolutionary design in architecture. Frazer taught at the Architectural Association in London, Cambridge University, and the University of Ulster. He is the former head of the School of Design at Hong Kong Polytechnic and the Queensland University of Technology.
Read more: http://www.interactivearchitecture.org/the-generator-project.html
"According to a not-at-all recent report by Keeper, there’s a 50/50 chance that any user account can be accessed with one of the 25 most common passwords. And there’s a 17% chance that the password is 123456."
https://hackernoon.com/picking-the-low-hanging-passwords-b64684fe2c7 #InfoSec
Paul Virilio talked about the three bombs. He was prescient about the "data bomb".
"I agree with what Einstein used to say about the three bombs: there are three bombs. The first one is the atomic bomb, which disintegrates reality. The second one is the digital or the computer bomb, which destroys the principle of reality itself - not the actual object, and rebuilds it. And finally the third bomb is the demographic one." - Paul Virilio
The interactive evolutionary algorithm in Nintendo Wii Mii Creator:
#ML #Evolution #Generative #HCI
"The Nintendo Wii Mii Creator application works either by manual editing of face and body features, or by an interactive evolutionary algorithm (Takagi, 2001, "Interactive Evolutionary Computation: Fusion of the Capabilities of EC Optimization and Human Evaluation"; Dawkins, 1986, "The Blind Watchmaker"), shown here. The evolutionary algorithm is accessed by choosing "Start from a lookalike". The user is presented with a large random population of faces, and chooses a favourite from them. A new (smaller) population of faces is created by the system, by mutating the current face (random changes to the face's features). Then the user chooses again, and this process loops. Gradually the user explores "face space" (Caldwell and Johnston, 1991, "Tracking a criminal suspect through face-space with a genetic algorithm") and hopefully finds the desired face."
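The choose-mutate-choose loop described above can be sketched in a few lines of Python. Everything here is illustrative, not Nintendo's actual implementation: a "face" is a hypothetical feature vector, and a simulated user stands in for the human making selections.

```python
import random

random.seed(42)  # for reproducibility of this demo

# A "face" is a hypothetical feature vector (eye spacing, nose size, ...).
# FEATURES, MUTATION_RATE, and MUTATION_STEP are made-up parameters.
FEATURES = 5
MUTATION_RATE = 0.3
MUTATION_STEP = 0.2

def random_face():
    return [random.uniform(0.0, 1.0) for _ in range(FEATURES)]

def mutate(face):
    """Randomly perturb some features of the chosen face."""
    child = face[:]
    for i in range(len(child)):
        if random.random() < MUTATION_RATE:
            child[i] = min(1.0, max(0.0,
                child[i] + random.uniform(-MUTATION_STEP, MUTATION_STEP)))
    return child

def interactive_evolution(choose, generations=10, pop_size=8):
    """`choose` stands in for the user: it picks a favourite from a population."""
    population = [random_face() for _ in range(pop_size)]
    favourite = choose(population)
    for _ in range(generations):
        # New, smaller population: mutated variants of the current favourite.
        population = [mutate(favourite) for _ in range(pop_size)]
        favourite = choose(population)
    return favourite

# Demo: an automated "user" who prefers faces close to a target in face space.
target = [0.5] * FEATURES

def simulated_user(population):
    return min(population,
               key=lambda f: sum((a - b) ** 2 for a, b in zip(f, target)))

result = interactive_evolution(simulated_user)
```

With a real user in the loop, `choose` would render the faces and wait for a click; the algorithm itself never needs an explicit fitness function, which is the point of interactive evolutionary computation.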
"Gloomy Sunday" - Latest addition to Memo Akten’s ‘Learning to See’ project uses a neural-network-based realtime image translation model to generate visual poetry.
“For instance, on the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much—the wheel, New York, wars and so on—whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man—for precisely the same reasons.”
― Douglas Adams, The Hitchhiker's Guide to the Galaxy
"Tracking all members of a honey bee colony over their lifetime": https://arxiv.org/abs/1802.03192v2
Yet another idea for the #FFHCI (flora-fauna-human-computer-interaction) toolkit.
"Modelling Affect for Horror Soundscapes": #ML #Generative #Music #Emotion http://antoniosliapis.com/papers/modelling_affect_for_horror_soundscapes.pdf
Abstract: "The feeling of horror within movies or games relies on the audience’s perception of a tense atmosphere — often achieved through sound accompanied by the on-screen drama — guiding its emotional experience throughout the scene or game-play sequence. These progressions are often crafted through an a priori knowledge of how a scene or game-play sequence will playout, and the intended emotional patterns a game director wants to transmit. The appropriate design of sound becomes even more challenging once the scenery and the general context is autonomously generated by an algorithm. Towards realizing sound-based affective interaction in games this paper explores the creation of computational models capable of ranking short audio pieces based on crowdsourced annotations of tension, arousal and valence. Affect models are trained via preference learning on over a thousand annotations with the use of support vector machines, whose inputs are low-level features extracted from the audio assets of a comprehensive sound library. The models constructed in this work are able to predict the tension, arousal and valence elicited by sound, respectively, with an accuracy of approximately 65%, 66% and 72%."
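The paper trains support vector machines via preference learning on pairwise annotations ("clip A is more tense than clip B"). A minimal sketch of that idea, using a pairwise perceptron as a simple stand-in for RankSVM and entirely synthetic "audio features" (the feature vectors, hidden scoring function, and all parameters below are made up for illustration):

```python
import random

random.seed(0)

# Each "clip" is a vector of low-level audio features; annotations are
# pairwise preferences. A pairwise perceptron learns weights w such that
# the preferred clip scores higher: dot(w, preferred) > dot(w, other).
N_FEATURES = 4

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train_pairwise(pairs, epochs=50, lr=0.1):
    """Perceptron on feature differences: one update per misranked pair."""
    w = [0.0] * N_FEATURES
    for _ in range(epochs):
        for preferred, other in pairs:
            diff = [a - b for a, b in zip(preferred, other)]
            if dot(w, diff) <= 0:  # ranked wrongly: nudge w toward diff
                w = [wi + lr * di for wi, di in zip(w, diff)]
    return w

# Synthetic ground truth: "tension" is a hidden linear function of features.
true_w = [0.8, -0.5, 0.3, 0.1]
clips = [[random.uniform(-1, 1) for _ in range(N_FEATURES)]
         for _ in range(40)]

# Build pairwise annotations from the hidden tension scores.
pairs = []
for _ in range(200):
    a, b = random.sample(clips, 2)
    pairs.append((a, b) if dot(true_w, a) > dot(true_w, b) else (b, a))

w = train_pairwise(pairs)
accuracy = sum(dot(w, p) > dot(w, o) for p, o in pairs) / len(pairs)
```

A RankSVM does the same pairwise-difference trick but with a max-margin objective and (optionally) kernels, which is what lets the paper's models reach ~65-72% accuracy on noisy crowdsourced rankings rather than clean synthetic data like this.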