  1. #1561

    Re: Let's Discuss Science

    Joey Roulette
    @joroulette·14m
    A Starship prototype just exploded in Boca Chica, Texas during static fire testing. SpaceX was granted an FAA license yesterday to conduct suborbital flights, not sure when those first test flights will happen. Video/live feed from @NASASpaceflight
    https://twitter.com/i/status/1266444972573818881
    “No matter how cynical I get, I just can't keep up.” – Lily Tomlin.




  2. #1562

    Re: Let's Discuss Science

    Rocket science is still very difficult, almost 60 years after the first human spaceflights...
    Roger forever

  3. #1563

    Re: Let's Discuss Science

    If you look at the total number of ALL space launches, and if you plot the number of fatalities that have taken place (2 shuttles, 1 burnt Apollo, and I do not know if there have been Russian fatalities), the FATALITY rate is extremely high. No commercial airliner would be allowed to take off with such numbers.
    It still takes a lot of courage to go up in one of those, especially since there are no MINOR space incidents. Apollo 13 notwithstanding (and that was not trivial either).
    Face it. It's the apocalypse.

  4. #1564

    Re: Let's Discuss Science

    Quote Originally Posted by ponchi101 View Post
    If you look at the total number of ALL space launches, and if you plot the number of fatalities that have taken place (2 shuttles, 1 burnt Apollo, and I do not know if there have been Russian fatalities), the FATALITY rate is extremely high. No commercial airliner would be allowed to take off with such numbers.
    It still takes a lot of courage to go up in one of those, especially since there are no MINOR space incidents. Apollo 13 notwithstanding (and that was not trivial either).

    I think there were 4 Soviet (human) casualties in the 60s and 70s, over 2 incidents (Soyuz 1 and Soyuz 11). I think before Gagarin went up, a few dogs died in space.


  5. #1565

    Re: Let's Discuss Science

    It's not just human fatalities; there are also lots of on-the-ground technical problems, like the one in Texas. I assume nobody died there.
    Roger forever

  6. #1566

    Re: Let's Discuss Science

    The next launch window is today at 15:22 EDT.
    Roger forever

  7. #1567

    Re: Let's Discuss Science

    Wasn't sure where to put this, so I'm placing it here. Do we have an environmental issues thread somewhere in the past?

    U.S. court blocks sales of Bayer's dicamba herbicide
    (Reuters) - German agrochemicals group Bayer AG has been blocked from selling its dicamba herbicide in the United States after an appeals court rejected a federal regulator’s permit for the product.

    A U.S. appeals court said on Wednesday that the U.S. Environmental Protection Agency (EPA) substantially understated the herbicide-related risks and entirely failed to look into other risks related to dicamba.

    Bayer said it strongly disagreed with the ruling.

    “Depending upon actions by the EPA and whether the ruling is successfully challenged, we will work quickly to minimize any impact on our customers this season,” the German company added.

    According to the ruling, the EPA had violated federal regulations and at no point identified if an extension of the herbicide registration by another two years could lead to a monopoly or near-monopoly.

    The EPA had imposed restrictions on the use of dicamba in November 2018 due to concerns about the potential damage to crops surrounding those it was being applied to.

    In February, Bayer had said it would appeal a U.S. jury’s $265 million damages award against it and BASF in favor of a Missouri farmer who said the company’s dicamba herbicide had destroyed his peach orchards.

    Bayer’s genetically engineered soy seeds are designed to be resistant to dicamba.
    "And for my next fearless prediction..."

  8. #1568

    Re: Let's Discuss Science

    I started a "Your own, personal Climate Change" thread years ago, but my idea was more about reporting things you could see yourself. It was after I saw how the Snowy Sierra in Venezuela had almost disappeared. The post above is more along the lines of this thread.
    Face it. It's the apocalypse.

  9. #1569

    Re: Let's Discuss Science

    George Floyd’s Autopsy and the Structural Gaslighting of America
    The weaponization of medical language emboldened white supremacy with the authority of the white coat. How will we stop it from happening again?

    By Ann Crawford-Roberts, Sonya Shadravan, Jennifer Tsai, Nicolás E. Barceló, Allie Gips, Michael Mensah, Nichole Roxas, Alina Kung, Anna Darby, Naya Misa, Isabella Morton, Alice Shen on June 6, 2020

    On May 29, the country was told that the autopsy of George Floyd “revealed no physical findings that support a diagnosis of traumatic asphyxiation,” and that “potential intoxicants” and preexisting cardiovascular disease “likely contributed to his death.” This requires clarification. Importantly, these commonly quoted phrases did not come from a physician, but were taken from a charging document that utilized politicized interpretations of medical information. As doctors, we wish to highlight for the public that this framing of the circumstances surrounding Floyd’s death was at best, a misinterpretation, and at worst, a deliberate obfuscation.

    A timeline of events illustrates how a series of omissions and commissions regarding Mr. Floyd’s initial autopsy results deceptively fractured the truth. On May 28, a statement released by the Hennepin County Medical Examiner’s office reported ongoing investigations and acknowledgement from the forensic pathologist that an “autopsy … must be interpreted in the context of the pertinent investigative information.” As per standardized medical examination, Floyd’s underlying health conditions and toxicology screen were documented. These are ordinary findings that do not suggest causation of death, yet headlines and the May 29 charging document falsely overstated the role of Floyd’s coronary artery disease and hypertension, which increase the risk of stroke and heart attack over years, not minutes. Asphyxia—suffocation—does not always demonstrate physical signs, as other physician groups have noted.

    Without this important medical context, however, the public was left to reconcile manipulated medical language with the evidence they had personally witnessed. Ultimately, the initial report overstated and misrepresented the role of chronic medical conditions, inappropriately alluded to intoxicants, and failed to acknowledge the stark reality that but for the defendant’s knee on George Floyd’s neck, he would not be dead today.

    By Monday, June 1, in the context of widespread political pressure, the public received two reports: the preliminary autopsy report commissioned by Floyd’s family and performed by private doctors, and—shortly thereafter—a summary of the preliminary autopsy from the Hennepin County Medical Examiner’s Office. Both reports stated that the cause of Floyd’s death was homicide: death at the hands of another.

    By inaccurately portraying the medical findings from the autopsy of George Floyd, the legal system and media emboldened white supremacy, all under the cloak of authoritative scientific rhetoric. They took standard components of a preliminary autopsy report to cast doubt, to sow uncertainty; to gaslight America into thinking we didn’t see what we know we saw. In doing so, they perpetuated stereotypes about disease, risky behavior and intoxication in Black bodies to discredit a victim of murder. This state of affairs is not an outlier—it is part of a patterned and tactical distortion of facts wherein autopsy reports are manipulated to bury police violence and uphold white supremacy. As Ida B. Wells said, “Those who commit the murders write the reports.” A similar conflict of interest between police departments and medical examiners’ offices continues today.

    As physicians, we will not be complicit in the ongoing manipulation of medical expertise to erase government-sanctioned violence. Though we are relieved that two independent examinations invalidated the preliminary findings in the charging document and the headlines that deceitfully undermined Chauvin’s culpability in Floyd’s murder, our initial indignation is not replaced by celebration.

    For three days, Black Americans sat—and still sit—with the all-too-familiar pangs of being told that the truth is not true. Of fearing that the law would believe a physician’s report over the reality they saw with their own eyes, and have lived with their own lives. It's a miscarriage of justice that deepens the cut: not only can Black people be killed with impunity, but a physician’s autopsy report can be twisted to replace the truth.

    https://blogs.scientificamerican.com...ng-of-america/
    “No matter how cynical I get, I just can't keep up.” – Lily Tomlin.




  10. #1570

    Re: Let's Discuss Science

    Fifth State of Matter Created on International Space Station
    A fifth state of matter has been created in space, with scientists producing Bose-Einstein condensates onboard the International Space Station (ISS) in the Cold Atom Lab experiment.

    A Bose-Einstein condensate is a state of matter that forms when a group of atoms clumps together when cooled to near absolute zero (-459.67 F). In this state, the atoms have quantum properties and offer an insight into quantum mechanics. Producing and studying them on Earth, however, is difficult because of gravity.

    A Bose–Einstein condensate is a state of matter formed when a gas of bosons (such as rubidium atoms) is cooled down close to absolute zero. At this low temperature, the atoms become a single entity with quantum properties. Bose–Einstein condensates straddle the boundary between the microscopic world, governed by quantum mechanics, and the macroscopic world, governed by classical physics. As such, they may offer fundamental insights into quantum mechanics, but measuring them precisely is hampered by gravity. This force disrupts the magnetic fields required to hold them in a stable state to study. In space, with less gravity, this is far less of a problem.

    "This actually is something I've been trying to do for about 23 years now," Robert Thompson, from the California Institute of Technology, told Newsweek in an email.

    Thompson is one of the authors of a study published in Nature that describes the experiments on the ISS. The Cold Atom Lab uses several stages of cooling to produce the Bose–Einstein condensates. Lasers are used to cool atoms from room temperature down to around a millionth of a degree above absolute zero. From here, the atoms are moved into a magnetic trap, where the hottest ones are pulled off. The trap is then expanded, which cools the atoms further.
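    (An aside, not from the article: the temperature scale involved can be estimated from the standard ideal-gas formula for the condensation temperature, T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))^(2/3). Below is a minimal Python sketch; the atom density n is an illustrative order of magnitude, not the Cold Atom Lab's actual figure.)

        import math

        # Physical constants (SI units)
        hbar = 1.054571817e-34          # reduced Planck constant, J*s
        k_B = 1.380649e-23              # Boltzmann constant, J/K
        m_Rb = 87 * 1.66053906660e-27   # mass of a rubidium-87 atom, kg

        # Illustrative peak density for a trapped ultracold gas, atoms per m^3.
        n = 1e19

        zeta_3_2 = 2.612                # Riemann zeta(3/2), from ideal Bose gas theory

        # Ideal-gas BEC transition temperature.
        T_c = (2 * math.pi * hbar**2 / (m_Rb * k_B)) * (n / zeta_3_2) ** (2 / 3)
        print(f"T_c ~ {T_c:.1e} K")     # about 1e-7 K, i.e. roughly 100 nanokelvin

    Anything much warmer than that and the gas stays an ordinary thermal cloud, which is why the multi-stage cooling described above is needed.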

    After generating the Bose-Einstein condensates, the researchers were able to compare how they behave in microgravity with how those produced on Earth behave. Findings showed the ISS Bose-Einstein condensates could be observed for over a second. This is far longer than on Earth, where they last only tens of milliseconds.

    By being able to observe them for longer, scientists will be able to take better measurements of them. This will help with the study of ultracold atomic gases, as well as our understanding of fundamental physics.


    Thompson said more experiments are planned for the Cold Atom Lab. This includes creating "spherical 'bubble' condensates which can only be observed in space," as well as looking at the characteristics of collisions of quantum objects. Researchers will also be looking to test Einstein's theory of relativity to rule out certain candidates for dark matter—the invisible form of matter thought to make up around 25 percent of the universe—and dark energy, which is thought to drive the expansion of the universe.

    "Several possible candidates for both dark matter and dark energy could potentially be observed with an atom interferometer," Thompson said. "One of our planned PI experiments...aims to search for signs of one particular candidate for dark energy, namely a chameleon field, which has a varying effective mass depending on the nearby energy density, and thus will experience a different acceleration if its near a massive object such as the edge of the vacuum chamber. This can be precisely measured by an atom interferometer, with sensitivity enhanced because microgravity allows the atoms to linger near the surface for an extended time."

    He said the team has also recently observed the first atom interferometer in space. In the experiment, a laser pulse was used as a beam-splitter so "each individual atom is effectively in two places at once." Pulses are then used to recombine the atoms.

    This, Thompson said, will form the basis of a new generation of precise quantum sensors. "This is the first matter-wave interferometer ever demonstrated in space, and heralds a future in which space-based quantum sensors become a widely used tool for scientists wishing to explore the universe," he said.
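    (Again not from the article, but for a sense of scale: the textbook phase shift of a light-pulse atom interferometer is phi = k_eff * a * T^2, where k_eff is the effective laser wavevector, a the acceleration being sensed, and T the free-evolution time between pulses. A small sketch with illustrative numbers:)

        import math

        wavelength = 780e-9                     # rubidium D2 line, metres
        k_eff = 2 * (2 * math.pi / wavelength)  # two-photon kick: twice the photon wavevector

        a = 9.81    # acceleration along the beam, m/s^2 (Earth gravity, for scale)

        # Phase shift phi = k_eff * a * T^2 for two pulse separations.
        for T in (0.1, 1.0):                    # seconds of free evolution
            phi = k_eff * a * T**2
            print(f"T = {T:4.1f} s  ->  phi = {phi:.2e} rad")

    Because the phase grows as T squared, letting the atoms drift ten times longer in microgravity buys a hundredfold gain in sensitivity, which is the advantage Thompson describes.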
    "And for my next fearless prediction..."

  11. #1571

    Re: Let's Discuss Science

    Pretty cool. And a bit of a reminder that Einstein was more than relativity. He really covered a lot of ground.
    Face it. It's the apocalypse.

  12. #1572

    Re: Let's Discuss Science

    Good article about the fundamental limitations of today's machine learning and AIs, which are not really AIs. This one is about cars, but I can attest that a similar "lack of common sense" also manifests in my area, and there is also a diminishing belief that everything can be computed. In science, in addition to the issues described below, the fundamental limitation is that a computer discovers nothing new unless it is already in the data you have.


    Driverless cars show the limits of today’s AI
    They, and many other such systems, still struggle to handle the unexpected


    Jun 11th 2020
    In March Starsky Robotics, a self-driving lorry firm based in San Francisco, closed down. Stefan Seltz-Axmacher, its founder, gave several reasons for its failure. Investors’ interest was already cooling, owing to a run of poorly performing tech-sector IPOs and a recession in the trucking business. His firm’s focus on safety, he wrote, did not go down well with impatient funders, who preferred to see a steady stream of whizzy new features. But the biggest problem was that the technology was simply not up to the job. “Supervised machine learning doesn’t live up to the hype. It isn’t actual artificial intelligence akin to C-3PO [a humanoid robot from the “Star Wars” films]. It’s a sophisticated pattern-matching tool.”

    Policing social media, detecting fraud and defeating humans at ancient games are all very well. But building a vehicle that can drive itself on ordinary roads is—along with getting computers to conduct plausible conversations—one of the grand ambitions of modern AI. Some imagined driverless cars could do away with the need for car ownership by letting people summon robotaxis at will. They believed they would be safer, too: computers never tire, and their attention never wanders. According to the WHO, over a million people a year die in car accidents caused by fallible human drivers. Advocates hoped to cut those numbers drastically.


    And they would do it soon. In 2015 Elon Musk, the boss of Tesla, an electric-car maker, predicted the arrival of “complete autonomy” by 2018. Cruise, a self-driving firm acquired by General Motors in 2016, had planned to launch self-driving taxis in San Francisco by 2019. Chris Urmson, then the boss of Waymo, a Google subsidiary widely seen as the market leader, said in 2015 that he hoped his son, then 11 years old, would never need a driving licence.

    But progress has lagged. In 2018 a self-driving car being tested by Uber, a ride-hailing service, became the first to kill a pedestrian when it hit a woman pushing a bicycle across a road in Arizona. Users of Tesla’s “Autopilot” software must, despite its name, keep their hands on the wheel and their eyes on the road (several who seem to have failed to do so have been killed in crashes). The few firms that carry passengers, such as Waymo in America and WeRide in China, are geographically limited and rely on human safety drivers. Mr Urmson, who has since left Waymo, now thinks that adoption will be slower and more gradual.

    Black swans and bitter lessons
    Self-driving cars work in the same way as other applications of machine learning. Computers crunch huge piles of data to extract general rules about how driving works. The more data, at least in theory, the better the systems perform. Tesla’s cars continuously beam data back to headquarters, where it is used to refine the software. On top of the millions of real-world miles logged by its cars, Waymo claims to have generated well over a billion miles-worth of data using ersatz driving in virtual environments.
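    (An illustrative aside, not the Economist's or any carmaker's actual code: "extracting general rules from data" in its simplest form is just pattern matching against stored examples, and performance does climb with more data. A toy nearest-neighbour sketch in Python:)

        import numpy as np

        rng = np.random.default_rng(0)

        def make_data(n):
            # Toy "driving" data: two features (say, gap and closing speed),
            # binary label (brake or not), generated from a hidden rule.
            X = rng.uniform(-1, 1, size=(n, 2))
            y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
            return X, y

        def nn_predict(X_train, y_train, X_test):
            # 1-nearest-neighbour: pure pattern matching against stored examples.
            d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
            return y_train[d.argmin(axis=1)]

        X_test, y_test = make_data(500)
        for n in (10, 100, 1000, 10000):
            X_tr, y_tr = make_data(n)
            acc = (nn_predict(X_tr, y_tr, X_test) == y_test).mean()
            print(f"{n:>6} training examples -> accuracy {acc:.3f}")

    Accuracy climbs with the size of the training set, but only for inputs that resemble the training data.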

    The problem, says Rodney Brooks, an Australian roboticist who has long been sceptical of grand self-driving promises, is that deep-learning approaches are fundamentally statistical, linking inputs to outputs in ways specified by their training data. That leaves them unable to cope with what engineers call “edge cases”—unusual circumstances that are not common in those training data. Driving is full of such oddities. Some are dramatic: an escaped horse in the road, say, or a light aircraft making an emergency landing on a highway (as happened in Canada in April). Most are trivial, such as a man running out in a chicken suit. Human drivers usually deal with them without thinking. But machines struggle.

    One study, for instance, found that computer-vision systems were thrown when snow partly obscured lane markings. Another found that a handful of stickers could cause a car to misidentify a “stop” sign as one showing a speed limit of 45mph. Even unobscured objects can baffle computers when seen in unusual orientations: in one paper a motorbike was classified as a parachute or a bobsled. Fixing such issues has proved extremely difficult, says Mr Seltz-Axmacher. “A lot of people thought that filling in the last 10% would be harder than the first 90%”, he says. “But not that it would be ten thousand times harder.”
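    (A hedged sketch of the underlying effect, not the stop-sign study's code: for a high-dimensional statistical model, a tiny, carefully aimed nudge to every input dimension can flip the output even though each individual change is negligible. Here is a toy linear classifier in numpy:)

        import numpy as np

        rng = np.random.default_rng(1)

        d = 3072                        # a 32x32 RGB image, flattened
        w = rng.normal(size=d)          # weights of a "trained" linear classifier
        x = rng.normal(size=d)          # an input image
        if w @ x < 0:
            x = -x                      # ensure the clean input reads as "stop sign" (score > 0)

        eps = 0.05                      # per-pixel perturbation budget: visually negligible
        x_adv = x - eps * np.sign(w)    # nudge each pixel against the gradient (the FGSM trick)

        print(f"clean score:       {w @ x: .1f}")      # positive: "stop sign"
        print(f"adversarial score: {w @ x_adv: .1f}")  # negative: misread

    The clean margin scales roughly like the square root of the input dimension while the attack's effect scales linearly with it, so in image-sized input spaces a sticker-sized change is plenty.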

    Mary “Missy” Cummings, the director of Duke University’s Humans and Autonomy Laboratory, says that humans are better able to cope with such oddities because they can use “top-down” reasoning about the way the world works to guide them in situations where “bottom-up” signals from their senses are ambiguous or incomplete. AI systems mostly lack that capacity and are, in a sense, working with only half a brain. Though they are competent in their comfort zone, even trivial changes can be problematic. In the absence of the capacity to reason and generalise, computers are imprisoned by the same data that make them work in the first place. “These systems are fundamentally brittle,” says Dr Cummings.

    This narrow intelligence is visible in areas beyond just self-driving cars. Google’s “Translate” system usually does a decent job at translating between languages. But in 2018 researchers noticed that, when asked to translate 18 repetitions of the word “dog” into Yoruba (a language spoken in parts of Nigeria and Benin) and then back into English, it came up with the following: “Doomsday Clock is at three minutes to twelve. We are experiencing characters and dramatic developments in the world, which indicate that we are increasingly approaching the end times and Jesus’ return.”

    Gary Marcus, a professor of psychology at New York University, says that, besides its comedy value, the mistranslation highlights how Google’s system does not understand the basic structure of language. Concepts like verbs or nouns are alien, let alone the notion that nouns refer to physical objects in a real world. Instead, it has constructed statistical rules linking strings of letters in one language with strings of letters in another, without any understanding of the concepts to which those letters refer. Language processing, he says, is therefore still baffled by the sorts of questions a toddler would find trivial.

    How much those limitations matter varies from field to field. An automated system does not have to be better than a professional human translator to be useful, after all (Google’s system has since been tweaked). But it does set an upper bound on how useful chatbots or personal assistants are likely to become. And for safety-critical applications like self-driving cars, says Dr Cummings, AI’s limitations are potentially show-stopping.

    Researchers are beginning to ponder what to do about the problem. In a conference talk in December Yoshua Bengio, one of AI’s elder statesmen, devoted his keynote address to it. Current machine-learning systems, said Dr Bengio, “learn in a very narrow way, they need much more data to learn a new task than [humans], they need humans to provide high-level concepts through labels, and they still make really stupid mistakes”.

    Beyond deep learning
    Different researchers have different ideas about how to try to improve things. One idea is to widen the scope, rather than the volume, of what machines are taught. Christopher Manning, of Stanford University’s AI Lab, points out that biological brains learn from far richer data-sets than machines. Artificial language models are trained solely on large quantities of text or speech. But a baby, he says, can rely on sounds, tone of voice or tracking what its parents are looking at, as well as a rich physical environment to help it anchor abstract concepts in the real world. This shades into an old idea in AI research called “embodied cognition”, which holds that if minds are to understand the world properly, they need to be fully embodied in it, not confined to an abstracted existence as pulses of electricity in a data-centre.

    Biology offers other ideas, too. Dr Brooks argues that the current generation of ai researchers “fetishise” models that begin as blank slates, with no hand-crafted hints built in by their creators. But “all animals are born with structure in their brains,” he says. “That’s where you get instincts from.”

    Dr Marcus, for his part, thinks machine-learning techniques should be combined with older, “symbolic AI” approaches. These emphasise formal logic, hierarchical categories and top-down reasoning, and were most popular in the 1980s. Now, with machine-learning approaches in the ascendancy, they are a backwater.
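    (A toy illustration of the hybrid idea, not Dr Marcus's actual proposal: let a statistical classifier propose and a symbolic rule dispose, so top-down knowledge can veto a low-confidence bottom-up guess.)

        def statistical_classifier(image):
            # Stand-in for a neural net: returns (label, confidence).
            # Hard-coded here to mimic the sticker-fooled misreading above.
            return ("speed_limit_45", 0.58)

        def symbolic_override(label, confidence, context):
            # Top-down knowledge: an octagonal red sign is a stop sign by
            # definition, whatever the pixel-level pattern matcher says.
            if context["shape"] == "octagon" and context["colour"] == "red":
                return "stop"
            if confidence < 0.9:
                return "unknown_slow_down"   # refuse to act on a weak guess
            return label

        context = {"shape": "octagon", "colour": "red"}
        label, conf = statistical_classifier(None)
        print(symbolic_override(label, conf, context))   # -> "stop"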

    But others argue for persisting with existing approaches. Last year Richard Sutton, an AI researcher at the University of Alberta and DeepMind, published an essay called “The Bitter Lesson”, arguing that the history of AI shows that attempts to build human understanding into computers rarely work. Instead, most of the field’s progress has come courtesy of Moore’s law, and the ability to bring ever more brute computational force to bear on a problem. The “bitter lesson” is that “the actual contents of [human] minds are tremendously, irredeemably complex…They are not what should be built in [to machines].”



    Away from the research labs, expectations around driverless cars are cooling. Some Chinese firms are experimenting with building digital guide rails into urban infrastructure, in an attempt to lighten the cognitive burden on the cars themselves. Incumbent carmakers, meanwhile, now prefer to talk about “driver-assistance” tools such as automatic lane-keeping or parking systems, rather than full-blown autonomous cars. A new wave of startups has deliberately smaller ambitions, hoping to build cars that drive around small, limited areas such as airports or retirement villages, or vehicles which trundle slowly along pavements, delivering packages under remote human supervision. “There’s a scientific reason we’re not going to get to full self-driving with our current technology,” says Dr Cummings. “This less ambitious stuff—I think that’s much more realistic.”

    https://www.economist.com/technology...s-of-todays-ai
    Roger forever

  13. #1573

    Re: Let's Discuss Science

    I recently finished HUMAN COMPATIBLE, another book on AI. Although in reality it does not give much hope that AI won't eventually gain a foothold, it goes along the lines explained above: AI is still some way off. Any time somebody toots the horn because some progress has been made, it makes the headlines. All the failures remain out of sight.
    Face it. It's the apocalypse.

  14. #1574

    Re: Let's Discuss Science

    I'd never claim that there will never be a true AI, but a more fundamental breakthrough is required. I think the current purely statistical model won't make it, no matter how many transistors we cram onto a chip.

    As for self-driving cars in particular, the fundamental issue is that driving is, to some extent, a social behaviour. They're trying to reduce it entirely to rules, and failing.
    Roger forever

  15. #1575

    Re: Let's Discuss Science

    HUMAN COMPATIBLE talks a lot about systems that learn from their mistakes. AlphaGo, the program that beat the world champion at Go, played tens of thousands of games before it was able to reach a proper level of play. Then it continued to get better.
    Basically, the system learns from its failings. Which of course produces some serious questions when it comes to AI driving. If it is going to learn from its mistakes, that would mean a lot of accidents.
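    (A minimal sketch of what "learning from its failings" means mechanically, assuming tabular Q-learning, a simpler relative of the methods behind AlphaGo: the agent has to crash repeatedly before its value estimates steer it away from crashing.)

        import numpy as np

        # Tiny 1-D track: states 0..4. State 4 is the goal (+1 reward),
        # state 0 is a "crash" (-1 reward); both end the episode.
        rng = np.random.default_rng(0)
        moves = (-1, +1)                       # action 0 = left, action 1 = right
        Q = np.zeros((5, 2))                   # value estimate per (state, action)
        alpha, gamma, explore = 0.5, 0.9, 0.2

        crashes = 0
        for episode in range(500):
            s = 2                              # start mid-track
            while s not in (0, 4):
                a = rng.integers(2) if rng.random() < explore else int(Q[s].argmax())
                s2 = s + moves[a]
                r = 1.0 if s2 == 4 else (-1.0 if s2 == 0 else 0.0)
                crashes += (s2 == 0)
                Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
                s = s2

        print("crashes needed during training:", crashes)
        print("learned policy (1 = move right):", Q.argmax(axis=1)[1:4])

    Fine on a toy track or in a Go simulator; not so fine on a public road, where every -1 is a collision.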
    But you and I agree. Sooner or later, AI will pop up. The singularity will be reached. And then, we better have Artificial MORALITY ready too. Because otherwise, that is where some big problems will arise.
    Face it. It's the apocalypse.
