  1. #781

    Re: Techno-babble Random Random

    Facebook says its top product executive, Chris Cox, is leaving, the highest-level departure in years
    By Elizabeth Dwoskin March 14 at 4:08 PM

    Facebook’s top executive in charge of all products, Chris Cox, the longtime confidant of chief executive Mark Zuckerberg, is leaving the company, the highest-level departure at the social media giant amid nearly two years of sustained crises.

    Cox’s unexpected departure, which he and Zuckerberg announced in separate Facebook posts Thursday, comes months after Cox was promoted in a major reorganization. In May, Cox was put in charge of Facebook’s “family of apps,” including Instagram, Messenger, WhatsApp and Facebook itself — which together have over 2.7 billion users worldwide. These apps have been distinct until recently, when Zuckerberg announced plans to unify them with a new focus on privacy.

    “It is with great sadness I share with you that after thirteen years, I’ve decided to leave the company,” Cox wrote in his post. “Since I was twenty-three, I’ve poured myself into these walls. This place will forever be a part of me.” Cox didn’t offer any explanation for his departure.

    In his blog post, Zuckerberg said that Cox had told him several years ago that he planned to move on but that Cox decided to hold off on leaving until the company made more progress combating misinformation and Russian interference — controversies that erupted in the wake of the 2016 election.

    “At this point, we have made real progress on many issues and we have a clear plan for our apps, centered around making private messaging, stories and groups the foundation of the experience, including enabling encryption and interoperability across our services,” Zuckerberg wrote. “As we embark on this next major chapter, Chris has decided now is the time to step back from leading these teams.”

    The company is facing multiple federal investigations over data privacy, stagnating user growth in its most lucrative markets, and a record-low reputation with the public. It is also embarking on a major reshuffling of its leadership.

    In his post, Zuckerberg announced further reorganization of the company’s top ranks and the departure of Chris Daniels, another executive, who ran the company’s philanthropic project to promote global Internet access and was recently promoted to lead WhatsApp.

    Cox, who dropped out of a Stanford University graduate degree program to work with Zuckerberg when the company had just 15 engineers, was widely seen as one of the most popular and capable executives at the social network — and a potential replacement CEO, were Zuckerberg to leave. (Zuckerberg has said he has no plans to do so.)

    Perhaps more than anyone else at Facebook — even more than Chief Operating Officer Sheryl Sandberg — Cox was a sounding board for Zuckerberg on product ideas. He launched Facebook’s flagship scrolling news feed nearly a decade ago and ran human resources before he was promoted to run the Facebook app in 2014.
    "Even if you dance for your enemy on the rock, he will accuse you of splashing water on him." ~ African Proverb

  2. #782

    Re: Techno-babble Random Random

    Facebook acknowledges concerns over Cambridge Analytica emerged earlier than reported
    Company confirms suspicions of separate incident following Washington DC attorney general court filing

    Julia Carrie Wong in San Francisco

    Facebook employees were aware of concerns about “improper data-gathering practices” by Cambridge Analytica months before the Guardian first reported, in December 2015, that the political consultancy had obtained data on millions of users from an academic. The concerns appeared in a court filing by the attorney general for Washington DC and were subsequently confirmed by Facebook.

    The new information “could suggest that Facebook has consistently mislead [sic]” British lawmakers “about what it knew and when about Cambridge Analytica”, tweeted Damian Collins, the chair of the House of Commons digital, culture, media and sport select committee (DCMS), in response to the court filing.

    In a statement, a company spokesperson said: “Facebook absolutely did not mislead anyone about this timeline.”

    After publication of this article, the spokesperson acknowledged that Facebook employees heard rumors of data scraping by Cambridge Analytica in September 2015. The spokesperson said that this was a “different incident” from Cambridge Analytica’s acquisition of a trove of data about as many as 87m users that has been widely reported on for the past year.

    “In September 2015 employees heard speculation that Cambridge Analytica was scraping data, something that is unfortunately common for any internet service,” the spokesperson said. “In December 2015, we first learned through media reports that Kogan sold data to Cambridge Analytica, and we took action. Those were two different things.”

    The filing raised questions about when Facebook first learned about the misuse of personal data by Cambridge Analytica, the now defunct political consultancy.

    This timeline has long been complicated by the different corporate entities involved in Cambridge Analytica’s data misuse. The data of as many as 87m people was extracted from Facebook by GSR, a company formed by the former Cambridge University academic Aleksandr Kogan, then transferred to Cambridge Analytica’s parent company, SCL.

    The data extraction, though highly controversial, was not against Facebook’s policies, which at the time allowed GSR to take information not only from users who consented but from all their friends. It was the transfer of the data from GSR to SCL that was against Facebook’s policies, the company has long maintained.

    After Cambridge Analytica’s acquisition of the data for political purposes became an international scandal, Mark Zuckerberg stated that Facebook “learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica” in 2015. The article detailing this data sharing was published on 11 December 2015.

    The attorney general for Washington DC sued Facebook over its failure to protect user data from Cambridge Analytica in late 2018. Facebook has sought to have the case dismissed and to seal a document – currently redacted in filings – that the DC attorney general cited as evidence in his opposition to the motion to dismiss.

    The document is “an email exchange between Facebook employees discussing how Cambridge Analytica (and others) violated Facebook’s policies”, according to a Monday court filing by the DC attorney general.

    Those emails include “candid employee assessments that multiple third-party applications accessed and sold consumer data in violation of Facebook’s policies during the 2016 United States Presidential Election”, according to the filing. “It also indicates Facebook knew of Cambridge Analytica’s improper data-gathering practices months before news outlets reported on the issue,” the filing continues.

    The filing further asserts that “as early as September 2015, a DC-based Facebook employee warned the company that Cambridge Analytica” was doing something that is currently redacted and “received responses” – also redacted – relating to “Cambridge Analytica’s data-scraping practices”.

    A Facebook spokesperson clarified after publication that there may have been two separate instances of data misuse by Cambridge Analytica. The data-scraping referenced in the filing was not the same data harvesting that has become synonymous with Cambridge Analytica’s name over the past year, the company said.

    “Facebook was not aware of the transfer of data from Kogan/GSR to Cambridge Analytica until December 2015, as we have testified under oath,” the Facebook spokesperson said. “These were two different incidents.”

    And while the email exchange appears to have been referenced in a report by parliament’s DCMS committee, the committee seems to have mistakenly assumed that the emails referred to the Kogan/GSR data, not a separate Cambridge Analytica scraping incident.

    The report states: “We were keen to know when and which people working at Facebook first knew about the GSR/Cambridge Analytica breach. The ICO [Information Commissioner’s Office] confirmed, in correspondence with the Committee, that three ‘senior managers’ were involved in email exchanges earlier in 2015 concerning the GSR breach before December 2015, when it was first reported by The Guardian. At the request of the ICO, we have agreed to keep the names confidential, but it would seem that this important information was not shared with the most senior executives at Facebook, leading us to ask why this was the case.”

    Facebook will face off with the District of Columbia in court on Friday, where a judge will hear arguments over the company’s motion to dismiss the lawsuit. The judge may also decide then whether to keep the email exchange sealed.

    This article was updated to reflect new information provided by Facebook after publication.
    "Even if you dance for your enemy on the rock, he will accuse you of splashing water on him." ~ African Proverb

  3. #783

    Re: Techno-babble Random Random

    I think Facebook is all talk and no action. Zuckerberg has said several times, usually after getting caught, that FB will make a better effort to protect user privacy. Then, not long after, there's another report of FB selling, or at least not protecting, user data.
    Oh Grigor. You silly man.

  4. #784

    Re: Techno-babble Random Random

    So who's laughing at MYSPACE right now?
    Starry starry night

  5. #785

    Re: Techno-babble Random Random

    Quote Originally Posted by ponchi101 View Post
    So who's laughing at MYSPACE right now?
    They did just lose everyone's data and uploads from roughly 2001–2015 during a server migration or something so...
    "Even if you dance for your enemy on the rock, he will accuse you of splashing water on him." ~ African Proverb

  6. #786

    Re: Techno-babble Random Random

    Quote Originally Posted by Ti-Amie View Post
    They did just lose everyone's data and uploads from roughly 2001 - 2015 during a server migration or something so...
    All 25 of them?!
    Starry starry night

  7. #787

    Re: Techno-babble Random Random

    Quote Originally Posted by ponchi101 View Post
    All 25 of them?!
    "Even if you dance for your enemy on the rock, he will accuse you of splashing water on him." ~ African Proverb

  8. #788

    Re: Techno-babble Random Random

    8chan looks like a terrorist recruiting site after the New Zealand shooting. Should the government treat it like one?

    By Drew Harwell and
    Craig Timberg March 22 at 4:32 PM
    As most of the world condemned last week’s mass shooting in New Zealand, a contrary story line emerged on 8chan, the online message board where the alleged shooter had announced the attack and urged others to continue the slaughter. “who should i kill?” one anonymous poster wrote. “I have never been this happy,” wrote another. “I am ready. I want to fight.”

    To experts in online extremism, the performance echoed another brand of terrorism — that carried out by Islamic militants who have long used the Web to mobilize followers and incite violence. Their tone, tactics and propaganda were eerily similar. The biggest difference was their ambitions: a white-supremacist uprising, instead of a Muslim caliphate.

    As Facebook, YouTube and other tech companies raced to contain the sounds and images of the gruesome shooting, 8chan helped it thrive, providing a no-holds-barred forum that further propelled the extremism and encouraged new attacks.

    The persistence of the talk of violence on 8chan has led some experts to call for tougher actions by the world’s governments, with some saying the site increasingly looks like the jihadi forums organized by the Islamic State and al-Qaeda — masters in flexing the Web’s power to spread their ideologies and recruit new terrorists. Critics of 8chan argue that the site, and others like it, may warrant a similar governmental response: close monitoring and, when talk turns to violence, law-enforcement investigation and intervention.

    The owner and administrators of 8chan, which is registered as a property of the Nevada-based company N.T. Technology, did not respond to multiple requests for comment through email addresses listed for the site, as well as a request placed through a founder of the site who said he remains in touch with Jim Watkins, an American who is based in the Philippines and owns the company.

    The 8chan site’s Twitter account said Saturday that it “is responding to law enforcement regarding the recent incident where many websites were used by a criminal to publicize his crime,” and said that it would not comment further. New Zealand police declined to comment on whether they had contacted 8chan.

    The 8chan administration is responding to law enforcement regarding the recent incident where many websites were used by a criminal to publicize his crime. We always comply with US law and won't comment further on this incident so as not to disrupt the ongoing investigation.

    — 8chan (@infinitechan) March 16, 2019
    But the brazenness of the threats of racist and anti-Muslim violence posted on 8chan poses a striking new challenge to a foundational idea of the Internet: that in all but the most extreme cases, such as child pornography, those hosting sites are not legally or morally responsible for the content others upload to them.

    Telecommunications companies in Australia and New Zealand already have taken the rare step of blocking Internet access to 8chan and some other sites. Public pressure is building as well on other companies, including some based in the United States, that provide the technical infrastructure for sites that espouse violence against Muslims, African Americans and Jews.

    “This is terrorism. It’s no different than what we see from ISIS,” said Joel Finkelstein, executive director of the Network Contagion Research Institute, which, in partnership with the Anti-Defamation League, studies how hateful ideas spread online. “The platforms are responsible if they are organizing and propagating terror.”

    A crackdown would mark an extraordinary step in confronting online extremism. Terrorism experts say U.S. law enforcement and intelligence agencies have been reluctant to treat white supremacists and right-wing groups as terrorist organizations because they typically include Americans among their ranks, creating complex legal and political issues. It’s a thorny issue for tech companies, too: Platforms such as Facebook and Twitter blocked white-supremacist content after the Charlottesville riots in 2017, a watershed moment that sparked a debate about censorship.

    Some are also skeptical that any effort to suppress such activity online would be successful, because the Web’s decentralized nature makes targeted takedowns difficult and allows hate groups to quickly retreat underground.

    The increasingly hateful tone of 8chan has become a cautionary tale for how corners of the Web can be radicalized. Launched in 2013, the site grew out of an exodus from the lightly moderated message board 4chan and quickly gained an audience as a cauldron for the extreme content few other sites are willing to support. The past week has marked a new low.

    “I’d never seen the whole board so happy about what had just happened. Fifty people are dead, and they’re in total ecstasy,” said 8chan’s founder, Fredrick Brennan, who said he stepped down as an administrator in 2016 and stopped working with the site’s ownership in December.

    Brennan said he has been stunned to see how little the current administrators have done to curb violent threats, and voiced remorse over his role in creating a site that now calls itself the “darkest reaches of the Internet.” But he worries there are no true technical solutions beyond a total redesign of the Web, focused on identification and moderation, that could undermine it as a venue for free expression.

    “The Internet as a whole is not made to be censored. It was made to be resilient,” Brennan said. “And as long as there’s a contingent of people who like this content, it will never go away.”

    A move to silence 8chan would clash with a key tenet of the Internet, enshrined in a landmark 1996 U.S. law, that allows Facebook, YouTube, Twitter and others to operate with minimal government interference. The Communications Decency Act sharply limits the legal liability of platforms for content their users post.

    But 8chan’s content in the aftermath of last week’s shooting has renewed debate over whether the Internet’s freewheeling culture has gone too far — and whether sites that harbor talk of white-supremacist violence should face the same depth of government scrutiny that previously seemed reserved for chat rooms frequented by members of Islamist terror cells.

    Federal authorities in the United States — mindful of constitutional protections for the free-speech rights of Americans and, in some cases, their links to mainstream political actors — have long been reluctant to gather intelligence among potential domestic terrorists in the same intrusive ways they do among foreign terrorist groups, said Clinton Watts, a senior fellow at the Foreign Policy Research Institute and a former FBI counterterrorism expert.

    Though the alleged Christchurch shooter last week was an Australian and 8chan is operated from the Philippines, Watts said the site likely attracts Americans, making it part of one of the bureau’s legal blind spots in combating domestic terror.

    “These domestic extremists are organizing in the same way” as foreign Muslim extremists, using websites to inspire bloodshed, radicalize believers and even plan assaults, he said. There was one key difference in the political and legal dynamics, however: “Domestic terrorists vote. Foreign terrorists don’t.”

    It’s unclear just how closely law enforcement is surveying sites like 8chan already. The FBI said in a statement that, while “individuals often are radicalized by looking at propaganda on social media sites and in some cases may decide to carry out acts of violence … the FBI only investigates matters in which there is a potential threat to national security or a possible violation of federal law.”

    Any move to crack down on sites that host conversation, no matter how loathsome, will confront the constitutional protections for free speech and the conviction among many experts that suppressing talk in one portion of the Internet will only prompt its growth elsewhere online.

    There are an ever-growing number of technological options for evading government censors, obscuring identities, faking locations and posting identical copies of disfavored content, which makes any quest to crack down on perceived misbehavior daunting for authorities, if not impossible.

    The spread of the shooting videos last week was a classic example: Even Facebook and YouTube were overmatched by human users, organized in part on 8chan, and were unable to block the images of mass murder for days. Both companies said afterward that they struggled to control the crush of uploads in the hours after the attack but were taking steps to prevent a recurrence.

    “When you shut things down of that nature, another one springs up,” said Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism. “What we’ve seen on 8chan is just on the surface.”

    Yet there’s less disagreement that the New Zealand shootings — two deadly attacks on mosques, including one live-streamed on Facebook — fit classic definitions of terrorism, meaning that the act was calculated to inspire public fear and spread an ideology. The platforms that helped spread videos of the killings, such as 8chan, played a role in that act that went beyond mere exchange of free speech as commonly understood, experts in online extremism said.

    Facebook’s former chief security officer, Alex Stamos, said the gunman’s tactics mimicked those of the Islamic State: committing an act of attention-grabbing mass violence, then bolstering and shaping that attention through technological means.

    “For all of his hatred of Muslims, he’s copying a Muslim supremacist organization,” Stamos said. “There’s a sad irony there.”

    Stamos is wary of government tactics that smack of censorship: He has long argued that any power you give to liberal Western democracies will be used by illiberal authoritarians to block legitimate speech. But he favors more aggressive law-enforcement monitoring of any site where terrorist acts are being planned.

    The FBI and other U.S. authorities for years have infiltrated the online sites of foreign terrorist organizations, as designated by the State Department, experts in political extremism said. This has included active monitoring of chats about jihadi themes, using false personas to engage potential terrorists in direct conversation and, in the most serious cases, taking action when violent plans appeared to be forming.

    “Thanks to the efforts of the companies and law enforcement, potential ISIS supporters got to the point where they couldn’t trust anybody they met online,” Stamos said. “They discouraged the hobbyists and left only real supporters in some of these online groups.”

    An anonymous audience for hate
    The anonymity of 8chan is its most critical feature — there are no profiles or post histories for users, who call themselves “anons,” making it difficult to know how many people visit the site, who they are, and whether their messages are legitimate threats or merely inflammatory posts intended to shock.

    The site portrays itself as a beacon of free speech and says it deletes only posts that clearly violate U.S. law, such as those featuring copyrighted material or child pornography. Its most active forum, the “politically incorrect” board “/pol/,” features more than 12 million posts and runs rampant with images of disturbing violence, white-supremacist memes and far-right hate speech. Brennan estimates more than 100,000 people visit the site every week.

    8chan lists one administrator — Ron Watkins, the son of N.T. Technology owner Jim Watkins — and roughly a dozen programmers and “global volunteers.” Brennan said Jim Watkins owns other Internet businesses and has built a technical fortress to guard 8chan from potential takedowns: He owns nearly every component securing the site to the backbone of the Web, including its servers, which are scattered around the world.

    “You can send a complaint, but no one’s going to do anything. He owns the whole operation,” Brennan said. “It’s how he keeps people confused and guessing.”

    Watkins did not respond to repeated requests for comment.

    The site’s only revenue comes from a small group of donors and advertisers whom Brennan estimates pay about $100 a month, which he said is not enough to cover the site’s expenses. But Watkins is content to lose money, Brennan said, because he sees it as a pet project: “8chan is like a boat to Jim. It doesn’t matter if it makes money. He just enjoys using it.”

    The board has grown increasingly fanatical, Brennan said, as its user base of early trolls and Internet libertarians has ceded ground to the “committed Nazis” who now dominate the site. In previous mass shootings, he said, the board often fueled anti-Semitic conspiracy theories that painted the attacks as faked. The Christchurch shooting marked the first time, Brennan said, that most users portrayed an attack as a point of pride and a step toward their goal of a global race war.

    Posters have pushed each other to flood the New Zealand police email inboxes with images of gore and pornography, to widely distribute the gunman’s writing, and to spray-paint a neo-Nazi symbol onto “Muslim-run” schools and businesses. Many glorified the gunman as a “hero” and said they would hang posters around their neighborhoods of a meme showing the gunman with his rifle and manifesto in a messianic pose, a halo of sun around his helmet camera. “This guy is the only person I’ve ever truly admired/looked up to in my life,” one poster wrote.

    Posters this week shared the names and addresses of religious centers they said they intended to target, as well as tips for future shooters on how to improve their videos for more “amazing kill shots … [and] details many of us are salivating for.” Links and memes of the gunman’s video and manifesto could be found virtually everywhere, as well as threats and eager calls to carry out more violence. “Invaders,” one poster wrote, had 90 days to leave the United States and other countries or “be executed on the spot.”

    Some 8chan posters hinted at even more private gathering places online. When one poster who said he was a white nationalist “highly inspired” by the killings asked where the board’s plans were for “accelerating” the gunman’s plan, another poster wrote that “we don’t discuss that here,” but on a dark-web site available only to those “that prove themselves.”

    Brennan said 8chan is only the most visible corner of a vast network of privately organized sites that shelter and fuel extremist thought. And while he believes 8chan and sites like it should enforce stricter moderation for violent messages, he also worries about a broad shift toward censorship that could push people further into the digital shadows: dark-web sites, secret chat rooms and decentralized file-sharing networks that are even harder to monitor and shut down.

    Brennan expects there will be another shooting because of 8chan, and he said he’s seen nothing from leaders there to suggest they would begin cracking down on incitements of brutality. Some of the people expected to moderate the site, he said, subscribe to extreme beliefs themselves. “It’s like having the lunatics run the asylum,” he said.

    ‘An extraordinary response’
    The enduring extremism on 8chan reveals what experts say has become an existential crisis for the Web: how the empowering freedom of digital connectivity can rally the most dismal and dangerous viewpoints together, often anonymously and consequence-free.

    It also highlights how even the biggest improvements from tech giants such as Facebook and YouTube, which have in recent days terminated hundreds of accounts “created to promote or glorify the shooter,” will do little to limit vile speech on a global stage.

    The sites’ anonymity can have real-world impact. Public school campuses in Charlottesville closed for two days this week after threats of an “ethnic cleansing” at a high school there surfaced Wednesday on 4chan.

    Internet service providers in Australia and New Zealand, which temporarily blocked access to 8chan, 4chan and other forum and video sites that hosted the shooting footage, showed one potential technical remedy. Telstra, Australia’s largest telecommunications company, said it took action following a request from the New Zealand government, which says sharing the content is a criminal offense. Nikos Katinakis, a top Telstra executive, said that while some sites have removed the content and seen their blocks lifted, 8chan remains blocked. “Extraordinary circumstances … required an extraordinary response,” he said in a statement.

    8chan, however, is shielded in another way: the U.S. web-services giant Cloudflare, which helps websites guard against “distributed denial of service,” or DDoS, attacks that online vigilante groups have used to target 8chan in the past.

    Cloudflare says that it helps 8chan and other websites regardless of their content, as long as they don’t violate U.S. laws, and that the company complies with court orders, works with law enforcement and bans terrorist propaganda networks and other groups on official sanction lists. Cloudflare would not discuss specific business or financial details about its relationship with 8chan.

    After the Charlottesville riots, Cloudflare stopped working with the neo-Nazi site Daily Stormer, a ban that led Cloudflare chief Matthew Prince to later question whether he had set a dangerous political precedent.

    Alissa Starzak, Cloudflare’s head of policy, said the role of policing should be left to the companies, governments or content moderators. She questioned the free-speech ramifications for revoking services from websites hosting content with which the company disagrees. “It’s still going to be on the Internet,” she said. “They might be more open to a DDoS attack, but is that the goal? A vigilante attack?”

    Alice Crites and Devlin Barrett contributed to this report.
    "Even if you dance for your enemy on the rock, he will accuse you of splashing water on him." ~ African Proverb


