Self-reported ‘science’ proves alcohol changes mood

In this ‘study’, self-reported mood changes after drinking seem to ‘prove’ that the sort of alcohol ingested (liquor, wine, beer) influences the drinker’s state of mind.

quote:

“Main outcome measures Positive and negative emotions associated with consumption of different alcoholic beverages (energised, relaxed, sexy, confident, tired, aggressive, ill, restless and tearful) over the past 12 months in different settings.

Results Alcoholic beverages vary in the types of emotions individuals report they elicit, with spirits more frequently eliciting emotional changes of all types. Overall 29.8% of respondents reported feeling aggressive when drinking spirits, compared with only 7.1% when drinking red wine (p<0.001). Women more frequently reported feeling all emotions when drinking alcohol, apart from feelings of aggression. Respondents’ level of alcohol dependency was strongly associated with feeling all emotions, with the likelihood of aggression being significantly higher in possible dependent versus low risk drinkers (adjusted OR 6.4; 95% CI 5.79 to 7.09; p<0.001). The odds of feeling the majority of positive and negative emotions also remained highest among dependent drinkers irrespective of setting.”


Evidently, the more alcohol one consumes, be it liquor, wine, beer, or anything else, the stronger the influence will be; but to write, and get accepted, a paper stating that the sort of beverage influences the mood is beyond absurd.

The use of ‘report/reported’ throughout this paper should have been more than enough to have it rejected for publication; how inebriated were the participants?

Unfortunately, it’s not only in alcohol research that self-reporting has been used as a valid measurement.

I invite anyone to do a word search for ‘report’ in fields such as psychology, sociology, psychiatry, air pollution, or any other field whose claims can’t be empirically proven.

Going by my personal experience, the reliance on self-reporting in those fields is overwhelming.


The ideological opposition to biological truth

Facts? We Don’t Need No Stinking Facts.

One distressing characteristic of the Left, at least as far as science is concerned, is to let our ideology trump scientific data; that is, some of us ignore biological data when it’s inimical to our political preferences. This plays out in several ways: the insistence that race doesn’t exist (and before you accuse me of saying that races do exist, read about what I’ve written here before: the issue is complex), that there are no evolutionarily-based innate (e.g., genetically based) behavioral or psychological differences between ethnic groups, and that there are no such differences, either, between males and females within humans.

These claims are based not on biological data, but on ideological fears of the Left: if we admit of such differences, it could foster racism and sexism. Thus, any group differences we do observe, whether they reside in psychology, physiology, or morphology, are to be explained on first principle as resulting from culture rather than genes. (I do of course recognize that culture can interact with genes to produce behaviors.) This ideological blinkering leads to the conclusion that when we see a difference in performance between groups and genders, the obvious explanation is culture and oppression, and the remedy is equal outcomes rather than equal opportunities. Yet in areas like most sports, where everyone agrees that males are on average larger and stronger than females, it’s clear that the behavioral differences (i.e., performance) result from biological differences that are surely based on evolution (see below). In sports like track and field or judo, nobody would think of making males compete with females.

Full Article

Dogs recognize dog and human emotions

Abstract

The perception of emotional expressions allows animals to evaluate the social intentions and motivations of each other. This usually takes place within species; however, in the case of domestic dogs, it might be advantageous to recognize the emotions of humans as well as other dogs. In this sense, the combination of visual and auditory cues to categorize others’ emotions facilitates the information processing and indicates high-level cognitive representations. Using a cross-modal preferential looking paradigm, we presented dogs with either human or dog faces with different emotional valences (happy/playful versus angry/aggressive) paired with a single vocalization from the same individual with either a positive or negative valence or Brownian noise. Dogs looked significantly longer at the face whose expression was congruent to the valence of vocalization, for both conspecifics and heterospecifics, an ability previously known only in humans. These results demonstrate that dogs can extract and integrate bimodal sensory emotional information, and discriminate between positive and negative emotions from both humans and dogs.

Biology Letters

How scientists fool themselves – and how they can stop

Humans are remarkably good at self-deception. But growing concern about reproducibility is driving many researchers to seek ways to fight their own worst instincts.

In 2013, five years after he co-authored a paper showing that Democratic candidates in the United States could get more votes by moving slightly to the right on economic policy [1], Andrew Gelman, a statistician at Columbia University in New York City, was chagrined to learn of an error in the data analysis. In trying to replicate the work, an undergraduate student named Yang Yang Hu had discovered that Gelman had got the sign wrong on one of the variables.

Gelman immediately published a three-sentence correction, declaring that everything in the paper’s crucial section should be considered wrong until proved otherwise.

Reflecting today on how it happened, Gelman traces his error back to the natural fallibility of the human brain: “The results seemed perfectly reasonable,” he says. “Lots of times with these kinds of coding errors you get results that are just ridiculous. So you know something’s got to be wrong and you go back and search until you find the problem. If nothing seems wrong, it’s easier to miss it.”

This is the big problem in science that no one is talking about: even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. But a smart strategy for evading lions does not necessarily translate well to a modern laboratory, where tenure may be riding on the analysis of terabytes of multidimensional data. In today’s environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept ‘reasonable’ outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.

Failure to understand our own biases has helped to create a crisis of confidence about the reproducibility of published results, says statistician John Ioannidis, co-director of the Meta-Research Innovation Center at Stanford University in Palo Alto, California. The issue goes well beyond cases of fraud. Earlier this year, a large project that attempted to replicate 100 psychology studies managed to reproduce only slightly more than one-third [2]. In 2012, researchers at biotechnology firm Amgen in Thousand Oaks, California, reported that they could replicate only 6 out of 53 landmark studies in oncology and haematology [3]. And in 2009, Ioannidis and his colleagues described how they had been able to fully reproduce only 2 out of 18 microarray-based gene-expression studies [4].

Nature

Inceptionism: Going Deeper into Neural Networks

Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don’t.

So let’s take a look at some simple techniques for peeking inside these networks.

We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want.

The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.
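
To make that concrete, here is a minimal sketch of such a stacked network and one training update, assuming PyTorch; the layer sizes, learning rate, and ten output classes are illustrative choices, not anything from the original post.

```python
# Minimal sketch of a stacked network trained by gradually adjusting its
# parameters (illustrative sizes and hyperparameters, assuming PyTorch).
import torch
import torch.nn as nn

model = nn.Sequential(                    # input layer -> hidden layers -> output
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),                   # the "output" layer: one score per class
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def training_step(images, labels):
    """One gradual adjustment of the network parameters."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)   # how far we are from the wanted answers
    loss.backward()                         # each parameter's share of the error
    optimizer.step()                        # nudge parameters in the right direction
    return loss.item()
```

Repeated over millions of training examples, this single step is the whole of the “training” described above.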

One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows.

For example, the first layer maybe looks for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations—these neurons activate in response to very complex things such as entire buildings or trees.
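
One way to watch that progression, sketched here as a hypothetical illustration reusing the `model` from the snippet above, is to attach PyTorch forward hooks and record what each layer produces for a single image.

```python
# Record intermediate activations with forward hooks (illustrative sketch,
# reusing the `model` defined in the previous snippet).
import torch

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()   # stash this layer's output
    return hook

for name, layer in model.named_modules():
    if isinstance(layer, torch.nn.ReLU):      # tap each hidden layer
        layer.register_forward_hook(save_activation(name))

model(torch.randn(1, 1, 28, 28))              # one placeholder input image
for name, act in activations.items():
    print(name, tuple(act.shape))             # early vs. late representations
```

Inspecting which units fire, and for which inputs, is one of the “simple techniques for peeking inside” mentioned at the top.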

One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work in [1], [2], [3], [4]). By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
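
A minimal sketch of that procedure, assuming PyTorch and torchvision with a pretrained GoogLeNet (the Inception network this post is named after): the step count, learning rate, and Gaussian-blur smoothing are stand-in choices, a crude version of the natural-image prior, not the exact regularization used by the authors.

```python
# Turn the network "upside down": adjust the input image, not the weights,
# so the classifier's "banana" score keeps rising, starting from pure noise.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

net = models.googlenet(weights="IMAGENET1K_V1").eval()
for p in net.parameters():
    p.requires_grad_(False)                    # freeze the network; only the image changes

image = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from random noise
optimizer = torch.optim.Adam([image], lr=0.05)
banana = 954                                   # ImageNet class index for "banana"

for step in range(200):
    optimizer.zero_grad()
    score = net(image)[0, banana]
    (-score).backward()                        # gradient ascent on the class score
    optimizer.step()
    if step % 4 == 0:                          # crude natural-image prior: blur the
        with torch.no_grad():                  # image so neighboring pixels correlate
            image.copy_(TF.gaussian_blur(image, kernel_size=5))
```

After enough steps, normalizing and saving `image` shows roughly what the network considers a banana.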

Google Research

Even Eminent Scientists can get dementia

(PhysOrg.com) — Eminent Australian scientist Professor Frank Fenner, who helped to wipe out smallpox, predicts humans will probably be extinct within 100 years, because of overpopulation, environmental destruction and climate change.

Fenner, who is emeritus professor of microbiology at the Australian National University (ANU) in Canberra, said homo sapiens will not be able to survive the population explosion and “unbridled consumption,” and will become extinct, perhaps within a century, along with many other species. United Nations official figures from last year estimate the human population is 6.8 billion, and is predicted to pass seven billion next year.

Fenner told The Australian he tries not to express his pessimism because people are trying to do something, but keep putting it off. He said he believes the situation is irreversible, and it is too late because the effects we have had on Earth since industrialization (a period now known to scientists unofficially as the Anthropocene) rivals any effects of ice ages or comet impacts.

Humans will be extinct in 100 years says eminent scientist

Although I do agree that Homo sapiens is due for extinction, being an error of evolution (99% of lifeforms went extinct over time), his reasons why are patently ridiculous, as is the timeframe.

Obesity both lowers and increases dementia risk

Obese ‘have lower dementia risk’

Interpretation
Being underweight in middle age and old age carries an increased risk of dementia over two decades. Our results contradict the hypothesis that obesity in middle age could increase the risk of dementia in old age. The reasons for and public health consequences of these findings need further investigation.

Funding: None

The Lancet

And wait for it:

Midlife overweight and obesity increase late-life dementia risk

Conclusions: Both overweight and obesity at midlife independently increase the risk of dementia, AD, and VaD. Genetic and early-life environmental factors may contribute to the midlife high adiposity–dementia association.

Funding: The National Institute on Aging (R01-AG08724), the Swedish Research Councils (FAS-09-0632), and the Swedish Brain Power. Also supported in part by funds from the Gamla Tjänarinnor, the Bertil Stohnes Foundation, the Demensfonden, the Loo and Hans Ostermans Foundation, and the Foundation for Geriatric Diseases at Karolinska Institutet.

Neurology

Great stuff, science…

Btw: picture of a healthy centenarian.