“Information Warfare” Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach

Whistleblower describes how firm linked to former Trump adviser Steve Bannon compiled user data to target American voters

The data analytics firm that worked with Donald Trump’s election team and the winning Brexit campaign harvested millions of Facebook profiles of US voters, in one of the tech giant’s biggest ever data breaches, and used them to build a powerful software program to predict and influence choices at the ballot box.

A whistleblower has revealed to the Observer how Cambridge Analytica – a company owned by the hedge fund billionaire Robert Mercer, and headed at the time by Trump’s key adviser Steve Bannon – used personal information taken without authorisation in early 2014 to build a system that could profile individual US voters, in order to target them with personalised political advertisements.

Christopher Wylie, who worked with a Cambridge University academic to obtain the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”

Documents seen by the Observer, and confirmed by a Facebook statement, show that by late 2015 the company had found out that information had been harvested on an unprecedented scale. However, at the time it failed to alert users and took only limited steps to recover and secure the private information of more than 50 million individuals.

The New York Times is reporting that copies of the data harvested for Cambridge Analytica could still be found online; its reporting team had viewed some of the raw data.

The data was collected through an app called thisisyourdigitallife, built by academic Aleksandr Kogan, separately from his work at Cambridge University. Through his company Global Science Research (GSR), in collaboration with Cambridge Analytica, hundreds of thousands of users were paid to take a personality test and agreed to have their data collected for academic use.

However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions strong. Facebook’s “platform policy” allowed collection of friends’ data only to improve user experience in the app, and barred it from being sold on or used for advertising.

The discovery of the unprecedented data harvesting, and the use to which it was put, raises urgent new questions about Facebook’s role in targeting voters in the US presidential election. It comes only weeks after indictments of 13 Russians by the special counsel Robert Mueller which stated they had used the platform to perpetrate “information warfare” against the US.

Cambridge Analytica and Facebook are one focus of an inquiry into data and politics by the British Information Commissioner’s Office. Separately, the Electoral Commission is also investigating what role Cambridge Analytica played in the EU referendum.


“We are investigating the circumstances in which Facebook data may have been illegally acquired and used,” said the information commissioner Elizabeth Denham. “It’s part of our ongoing investigation into the use of data analytics for political purposes which was launched to consider how political parties and campaigns, data analytics companies and social media platforms in the UK are using and analysing people’s personal information to micro-target voters.”


On Friday, four days after the Observer sought comment for this story, but more than two years after the data breach was first reported, Facebook announced that it was suspending Cambridge Analytica and Kogan from the platform, pending further information over misuse of data. Separately, Facebook’s external lawyers warned the Observer it was making “false and defamatory” allegations, and reserved Facebook’s legal position.

The revelations provoked widespread outrage. The Massachusetts Attorney General Maura Healey announced that the state would be launching an investigation. “Residents deserve answers immediately from Facebook and Cambridge Analytica,” she said on Twitter.

The Democratic senator Mark Warner said the harvesting of data on such a vast scale for political targeting underlined the need for Congress to improve controls. He has proposed an Honest Ads Act to regulate online political advertising the same way as television, radio and print. “This story is more evidence that the online political advertising market is essentially the Wild West. Whether it’s allowing Russians to purchase political ads, or extensive micro-targeting based on ill-gotten user data, it’s clear that, left unregulated, this market will continue to be prone to deception and lacking in transparency,” he said.

Last month both Facebook and the CEO of Cambridge Analytica, Alexander Nix, told a parliamentary inquiry on fake news that the company did not have or use private Facebook data.

Simon Milner, Facebook’s UK policy director, when asked if Cambridge Analytica had Facebook data, told MPs: “They may have lots of data but it will not be Facebook user data. It may be data about people who are on Facebook that they have gathered themselves, but it is not data that we have provided.”

Cambridge Analytica’s chief executive, Alexander Nix, told the inquiry: “We do not work with Facebook data and we do not have Facebook data.”

Wylie, a Canadian data analytics expert who worked with Cambridge Analytica and Kogan to devise and implement the scheme, showed a dossier of evidence about the data misuse to the Observer which appears to raise questions about their testimony. He has passed it to the National Crime Agency’s cybercrime unit and the Information Commissioner’s Office. It includes emails, invoices, contracts and bank transfers that reveal more than 50 million profiles – mostly belonging to registered US voters – were harvested from the site in one of the largest-ever breaches of Facebook data. Facebook on Friday said that it was also suspending Wylie from accessing the platform while it carried out its investigation, despite his role as a whistleblower.

At the time of the data breach, Wylie was a Cambridge Analytica employee, but Facebook described him as working for Eunoia Technologies, a firm he set up on his own after leaving his former employer in late 2014.

The evidence Wylie supplied to UK and US authorities includes a letter from Facebook’s own lawyers sent to him in August 2016, asking him to destroy any data he held that had been collected by GSR, the company set up by Kogan to harvest the profiles.

That legal letter was sent several months after the Guardian first reported the breach and days before it was officially announced that Bannon was taking over as campaign manager for Trump and bringing Cambridge Analytica with him.

“Because this data was obtained and used without permission, and because GSR was not authorised to share or sell it to you, it cannot be used legitimately in the future and must be deleted immediately,” the letter said.

Facebook did not pursue a response when the letter initially went unanswered for weeks because Wylie was travelling, nor did it follow up with forensic checks on his computers or storage, he said.

“That to me was the most astonishing thing. They waited two years and did absolutely nothing to check that the data was deleted. All they asked me to do was tick a box on a form and post it back.”

Paul-Olivier Dehaye, a data protection specialist, who spearheaded the investigative efforts into the tech giant, said: “Facebook has denied and denied and denied this. It has misled MPs and congressional investigators and it’s failed in its duties to respect the law.

“It has a legal obligation to inform regulators and individuals about this data breach, and it hasn’t. It’s failed time and time again to be open and transparent.”



A majority of American states have laws requiring notification in some cases of data breach, including California, where Facebook is based.

Facebook denies that the harvesting of tens of millions of profiles by GSR and Cambridge Analytica was a data breach. It said in a statement that Kogan “gained access to this information in a legitimate way and through the proper channels” but “did not subsequently abide by our rules” because he passed the information on to third parties.

Facebook said it removed the app in 2015 and required certification from everyone with copies that the data had been destroyed, although the letter to Wylie did not arrive until the second half of 2016. “We are committed to vigorously enforcing our policies to protect people’s information. We will take whatever steps are required to see that this happens,” Paul Grewal, Facebook’s vice-president, said in a statement. The company is now investigating reports that not all data had been deleted.

Kogan, who has previously unreported links to a Russian university and took Russian grants for research, had a licence from Facebook to collect profile data, but it was for research purposes only. So when he hoovered up information for the commercial venture, he was violating the company’s terms. Kogan maintains everything he did was legal, and says he had a “close working relationship” with Facebook, which had granted him permission for his apps.

The Observer has seen a contract dated 4 June 2014, which confirms SCL, an affiliate of Cambridge Analytica, entered into a commercial arrangement with GSR, entirely premised on harvesting and processing Facebook data. Cambridge Analytica spent nearly $1m on data collection, which yielded more than 50 million individual profiles that could be matched to electoral rolls. It then used the test results and Facebook data to build an algorithm that could analyse individual Facebook profiles and determine personality traits linked to voting behaviour.

The algorithm and database together made a powerful political tool. It allowed a campaign to identify possible swing voters and craft messages more likely to resonate.

“The ultimate product of the training set is creating a ‘gold standard’ of understanding personality from Facebook profile information,” the contract specifies. It promises to create a database of 2 million “matched” profiles, identifiable and tied to electoral registers, across 11 states, but with room to expand much further.

At the time, more than 50 million profiles represented around a third of active North American Facebook users, and nearly a quarter of potential US voters. Yet when asked by MPs if any of his firm’s data had come from GSR, Nix said: “We had a relationship with GSR. They did some research for us back in 2014. That research proved to be fruitless and so the answer is no.”

Cambridge Analytica said that its contract with GSR stipulated that Kogan should seek informed consent for data collection and it had no reason to believe he would not.

GSR was “led by a seemingly reputable academic at an internationally renowned institution who made explicit contractual commitments to us regarding its legal authority to license data to SCL Elections”, a company spokesman said.

SCL Elections, an affiliate, worked with Facebook over the period to ensure it was satisfied no terms had been “knowingly breached” and provided a signed statement that all data and derivatives had been deleted, he said. Cambridge Analytica also said none of the data was used in the 2016 presidential election.

Steve Bannon’s lawyer said he had no comment because his client “knows nothing about the claims being asserted”. He added: “The first Mr Bannon heard of these reports was from media inquiries in the past few days.” He directed inquiries to Nix.


from: https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election



How Cambridge Analytica turned Facebook ‘likes’ into a lucrative political tool

The algorithm used in the Facebook data breach trawled through personal data for information on sexual orientation, race, gender – and even intelligence and childhood trauma

The algorithm at the heart of the Facebook data breach sounds almost too dystopian to be real. It trawls through the most apparently trivial, throwaway postings – the “likes” users dole out as they browse the site – to gather sensitive personal information about sexual orientation, race, gender, even intelligence and childhood trauma.

A few dozen “likes” can give a strong prediction of which party a user will vote for, reveal their gender and whether their partner is likely to be a man or woman, provide powerful clues about whether their parents stayed together throughout their childhood and predict their vulnerability to substance abuse. And it can do all this without any need to delve into personal messages, posts, status updates, photos or all the other information Facebook holds.

Some results may sound more like the result of updated online sleuthing than sophisticated data analysis; “liking” a political campaign page is little different from pinning a poster in a window.



But five years ago psychology researchers showed that far more complex traits could be deduced from patterns invisible to a human observer scanning through profiles. Just a few apparently random “likes” could form the basis for disturbingly complex character assessments.

When users liked “curly fries” and Sephora cosmetics, this was said to give clues to intelligence; Hello Kitty likes indicated political views; “Being confused after waking up from naps” was linked to sexuality.

These were just some of the unexpected but consistent correlations noted in a paper in the Proceedings of the National Academy of Sciences journal in 2013. “Few users were associated with ‘likes’ explicitly revealing their attributes. For example, less than 5% of users labelled as gay were connected with explicitly gay groups, such as No H8 Campaign,” the peer-reviewed research found.

The researchers, Michal Kosinski, David Stillwell and Thore Graepel, saw the dystopian potential of the study and raised privacy concerns. At the time Facebook “likes” were public by default.

“The predictability of individual attributes from digital records of behaviour may have considerable negative implications, because it can easily be applied to large numbers of people without their individual consent and without them noticing,” they said.

“Commercial companies, governmental institutions, or even your Facebook friends could use software to infer attributes such as intelligence, sexual orientation or political views that an individual may not have intended to share.”

To some, that may have sounded like a business opportunity. By early 2014, Cambridge Analytica chief executive Alexander Nix had signed a deal with one of Kosinski’s Cambridge colleagues, lecturer Aleksandr Kogan, for a private commercial venture, separate from Kogan’s duties at the university, but echoing Kosinski’s work.

The academic had developed a Facebook app which featured a personality quiz, and Cambridge Analytica paid for people to take it, advertising on platforms such as Amazon’s Mechanical Turk.

The app recorded the results of each quiz, collected data from the taker’s Facebook account – and, crucially, extracted the data of their Facebook friends as well.

The results were paired with each quiz-taker’s Facebook data to seek out patterns and build an algorithm to predict results for other Facebook users. Their friends’ profiles provided a testing ground for the formula and, more crucially, a resource that would make the algorithm politically valuable.
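The mechanics described above – pair quiz scores with like data, learn per-page patterns, then score profiles that never took the quiz – can be illustrated with a deliberately tiny sketch. This is not the actual model, which has never been published; the page names, scores, and the simple averaging scheme below are all invented for illustration.

```python
# Toy sketch of the kind of model the article describes: predicting a
# personality trait from page "likes". All data here is invented.

def fit_like_weights(training_set):
    """Average the trait score of everyone who liked each page."""
    totals, counts = {}, {}
    for likes, trait_score in training_set:
        for page in likes:
            totals[page] = totals.get(page, 0.0) + trait_score
            counts[page] = counts.get(page, 0) + 1
    return {page: totals[page] / counts[page] for page in totals}

def predict_trait(weights, likes, default=0.0):
    """Score a new profile as the mean weight of the pages it liked."""
    known = [weights[p] for p in likes if p in weights]
    return sum(known) / len(known) if known else default

# Quiz-takers: (pages liked, openness score from the personality test)
training = [
    ({"page_a", "page_b"}, 0.9),
    ({"page_a"}, 0.7),
    ({"page_c"}, 0.1),
]
weights = fit_like_weights(training)

# A friend who never took the quiz can still be scored from likes alone.
print(round(predict_trait(weights, {"page_b", "page_c"}), 2))  # 0.5
```

The real system reportedly used far richer features and was matched against electoral rolls, but the asymmetry is the same: a small paid training set unlocks predictions about a vastly larger, unconsenting pool.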





To be eligible to take the test the user had to have a Facebook account and be a US voter, so tens of millions of the profiles could be matched to electoral rolls. From an initial trial of 1,000 “seeders”, the researchers obtained 160,000 profiles – or about 160 per person. Eventually a few hundred thousand paid test-takers would be the key to data from a vast swath of US voters.
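The yield figures quoted here suggest how quickly the pool grew. As rough back-of-the-envelope arithmetic – not figures from the contract itself; the 270,000 install count comes from Facebook's later statement, and it assumes the trial's roughly 160-profiles-per-taker rate held at scale:

```python
# Back-of-the-envelope from the figures reported in the article.
seed_users = 1_000        # initial trial of paid "seeders"
seed_profiles = 160_000   # profiles obtained from that trial
profiles_per_taker = seed_profiles // seed_users  # ~160 friends captured each

# Facebook later put the number of app installs at around 270,000.
paid_takers = 270_000
implied_pool = paid_takers * profiles_per_taker

print(profiles_per_taker)   # 160
print(f"{implied_pool:,}")  # 43,200,000
```

The implied pool of roughly 43 million is the same order of magnitude as the more-than-50-million figure in Wylie's dossier, which is consistent with a few hundred thousand test-takers being "the key to data from a vast swath of US voters".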

It was extremely attractive. It could also be deemed illicit, primarily because Kogan did not have permission to collect or use data for commercial purposes. His permission from Facebook to harvest profiles in large quantities was specifically restricted to academic use.

And although the company at the time allowed apps to collect friend data, it was only for use in the context of Facebook itself, to encourage interaction. Selling that data on, or putting it to other purposes – including Cambridge Analytica’s political marketing – was strictly barred.

It also appears likely the project was breaking British data protection laws, which ban sale or use of personal data without consent. That includes cases where consent is given for one purpose but data is used for another.

The paid test-takers signed up to T&Cs, including collection of their own data, and Facebook’s default terms allowed their friends’ data to be collected by an app, unless they had changed their privacy settings. But none of them agreed to their data possibly being used to create a political marketing tool or to it being placed in a vast campaign database.

Kogan maintains everything he did was legal and says he had a “close working relationship” with Facebook, which had granted him permission for his apps.

Facebook denies this was a data breach. Vice-president Paul Grewal said: “Protecting people’s information is at the heart of everything we do, and we require the same from people who operate apps on Facebook. If these reports are true, it’s a serious abuse of our rules.”



The scale of the data collection Cambridge Analytica paid for was so large it triggered an automatic shutdown of the app’s ability to harvest profiles. But Kogan told a colleague he “spoke with an engineer” to get the restriction lifted and, within a day or two, work resumed.

Within months, Kogan and Cambridge Analytica had a database of millions of US voters, with its own algorithm to scan it, identifying likely political persuasions and personality traits. They could then decide whom to target and craft messages likely to appeal to those individuals – a political approach known as “micro-targeting”.

Facebook announced on Friday that it was suspending Cambridge Analytica and Kogan from the platform pending information over misuse of data related to this project.

Facebook denies that the harvesting of tens of millions of profiles by GSR and Cambridge Analytica was a data breach.

It said in a statement that Kogan “gained access to this information in a legitimate way and through the proper channels”, but “did not subsequently abide by our rules” because he passed the information onto third parties.


from: https://www.theguardian.com/technology/2018/mar/17/facebook-cambridge-analytica-kogan-data-algorithm



Facebook has suspended the account of the whistleblower who exposed Cambridge Analytica


Tech hath no fury like a multi-billion dollar social media giant scorned.

In the latest turn of the developing scandal around how Facebook’s user data wound up in the hands of Cambridge Analytica — for use in the development of psychographic profiles that may or may not have played a part in the election victory of Donald Trump — the company has taken the unusual step of suspending the account of the whistleblower who helped expose the issues.



In a fantastic profile in The Guardian, Christopher Wylie revealed himself to be the architect of the technology that Cambridge Analytica used to develop targeted advertising strategies that arguably helped sway the U.S. presidential election.

A self-described gay, Canadian vegan, Wylie eventually became — as he told The Guardian — the developer of “Steve Bannon’s psychological warfare mindfuck tool.”

The goal, as The Guardian reported, was to combine social media’s reach with big data analytical tools to create psychographic profiles that could then be manipulated in what Bannon and Cambridge Analytica investor Robert Mercer allegedly referred to as a military-style psychological operations campaign — targeting U.S. voters.

In a series of tweets late Saturday, Wylie’s former employer, Cambridge Analytica, took issue with his characterization of events (and with much of the reporting around the stories in The Times and The Guardian).



Meanwhile, Carole Cadwalladr, the Guardian journalist who reported the story, noted on Twitter earlier today that she’d received a phone call from the aggrieved whistleblower.



Facebook has since weighed in with a statement of its own, telling media outlets:

“Mr. Wylie has refused to cooperate with us until we lift the suspension on his account. Given he said he ‘exploited Facebook to harvest millions of people’s profiles,’ we cannot do this at this time.

“We are in the process of conducting a comprehensive internal and external review as we work to determine the accuracy of the claims that the Facebook data in question still exists. That is where our focus lies as we remain committed to vigorously enforcing our policies to protect people’s information.”


from: https://techcrunch.com/2018/03/18/facebook-has-suspended-the-account-of-the-whistleblower-who-exposed-cambridge-analytica/



Facebook suspends Cambridge Analytica, the data analysis firm that worked on the Trump campaign



Facebook announced late Friday that it had suspended the account of Strategic Communication Laboratories, and its political data analytics firm Cambridge Analytica — which used Facebook data to target voters for President Donald Trump’s campaign in the 2016 election. In a statement released by Paul Grewal, the company’s vice president and deputy general counsel, Facebook explained that the suspension was the result of a violation of its platform policies. The company noted that the very unusual step of a public blog post explaining the decision to act against Cambridge Analytica was due to “the public prominence of this organization.”

Facebook claims that back in 2015 Cambridge Analytica obtained Facebook user information without approval from the social network through work the company did with a University of Cambridge psychology professor named Dr. Aleksandr Kogan. Kogan developed an app called “thisisyourdigitallife” that purported to offer a personality prediction in the form of “a research app used by psychologists.”

Apparently around 270,000 people downloaded the app, which used Facebook Login and granted Kogan access to users’ geographic information, content they had liked, and limited information about users’ friends. While Kogan’s method of obtaining personal information aligned with Facebook’s policies, “he did not subsequently abide by our rules,” Grewal stated in the Facebook post.


“By passing information on to a third party, including SCL/Cambridge Analytica and Christopher Wylie of Eunoia Technologies, he violated our platform policies. When we learned of this violation in 2015, we removed his app from Facebook and demanded certifications from Kogan and all parties he had given data to that the information had been destroyed. Cambridge Analytica, Kogan and Wylie all certified to us that they destroyed the data.”


Facebook said it first identified the violation in 2015 and took action — apparently without informing users of the violation. The company demanded that Kogan, Cambridge Analytica and Wylie certify that they had destroyed the information.

Over the past few days, Facebook said it received reports (from sources it would not identify) that not all of the data Cambridge Analytica, Kogan, and Wylie collected had been deleted. While Facebook investigates the matter further, the company said it had taken the step to suspend the Cambridge Analytica account as well as the accounts of Kogan and Wylie.

Depending on who you ask, UK-based Cambridge Analytica either played a pivotal role in the U.S. presidential election or cooked up an effective marketing myth to spin into future business. Last year, a handful of former Trump aides and Republican consultants dismissed the potency of Cambridge Analytica’s so-called secret sauce as “exaggerated” in a profile by the New York Times. A May 2017 profile in the Guardian that painted the Robert Mercer-funded data company as shadowy and all-powerful resulted in legal action on behalf of Cambridge Analytica. Last October, the Daily Beast reported that Cambridge Analytica’s chief executive Alexander Nix contacted Wikileaks’ Julian Assange with an offer to help disseminate Hillary Clinton’s controversial missing emails.

In an interview with TechCrunch late last year, Nix said that his company had detailed hundreds of thousands of profiles of Americans throughout 2014 and 2015 (the time when the company was working with Sen. Ted Cruz on his presidential campaign).


…We used psychographics all through the 2014 midterms. We used psychographics all through the Cruz and Carson primaries. But when we got to Trump’s campaign in June 2016, whenever it was, there was five and a half months till the elections. We just didn’t have the time to roll out that survey. I mean, Christ, we had to build all the IT, all the infrastructure. There was nothing. There was 30 people on his campaign. Thirty. Even Walker had 160 (it’s probably why he went bust). And he was the first to crash out. So as I’ve said to others of your [journalist] colleagues, clearly there’s psychographic data that’s baked in to legacy models that we built before, because we’re not reinventing the wheel. [We’ve been] using models that are based on models, that are based on models, and we’ve been building these models for nearly four years. And all of those models had psychographics in them. But did we go out and roll out a long-form quantitative psychographics survey specifically for Trump supporters? No. We just didn’t have time. We just couldn’t do that.

The key implication here is that data leveraged in the Trump campaign could have originated with Kogan before being shared to Cambridge Analytica in violation of Facebook policy. The other implication is that Cambridge Analytica may not have destroyed that data back in 2015.
The tools that Cambridge Analytica deployed have been at the heart of recent criticism of Facebook’s approach to handling advertising and promoted posts on the social media platform. Nix credits the fact that advertising was ahead of most political messaging, and that traditional political operatives hadn’t figured out that the tools used for creating ad campaigns could be so effective in the political arena.

“There’s no question that the marketing and advertising world is ahead of the political marketing, the political communications world,” Nix told TechCrunch last year. “…There are some things which [are] best practice digital advertising, best practice communications, which we’re taking from the commercial world and are bringing into politics.”

Responding to the allegations, Cambridge Analytica sent the following statement.


In 2014, SCL Elections contracted Dr. Kogan via his company Global Science Research (GSR) to undertake a large scale research project in the US. GSR was contractually committed to only obtain data in accordance with the UK Data Protection Act and to seek the informed consent of each respondent. GSR were also contractually the Data Controller (as per Section 1(1) of the Data Protection Act) for any collected data. The language in the SCL Elections contract with GSR is explicit on these points. GSR subsequently obtained Facebook data via an API provided by Facebook. When it subsequently became clear that the data had not been obtained by GSR in line with Facebook’s terms of service, SCL Elections deleted all data it had received from GSR. For the avoidance of doubt, no data from GSR was used in the work we did in the 2016 US presidential election.

Under Section 55 of the Data Protection Act (Unlawful obtaining etc. of personal data), a criminal offense has not been committed if a person has acted in the reasonable belief that he had in law the right to obtain data. GSR was a company led by a seemingly reputable academic at an internationally renowned institution who made explicit contractual commitments to us regarding its legal authority to license data to SCL Elections. It would be entirely incorrect to attempt to claim that SCL Elections illegally acquired Facebook data. Indeed SCL Elections worked with Facebook over this period to ensure that they were satisfied that SCL Elections had not knowingly breached any of Facebook’s Terms of Service and also provided a signed statement to confirm that all Facebook data and their derivatives had been deleted.

Cambridge Analytica and SCL Elections do not use or hold Facebook data.


from: https://techcrunch.com/2018/03/16/facebook-suspends-cambridge-analytica-the-data-analysis-firm-that-worked-for-the-trump-campaign/



Facebook is using us. It is actively giving away our information. It is creating an echo chamber in the name of connection. It surfaces the divisive and destroys the real reason we began using social media in the first place – human connection.

It is a cancer.

I’ve begun the slow process of weaning myself off of the platform by methodically running a script that will delete my old content. And there’s a lot. There are likes and shares. There are long posts I wrote to impress my friends. There are thousands of WordPress notifications that tell the world what I’m doing. In fact, I would wager I use Facebook more to broadcast my ego than interact with real humans. And I suspect that most of us are in a similar situation.
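The author doesn't say what his deletion script actually looks like. As a hedged sketch of the selection step only: the snippet below picks out posts older than a week from a hypothetical local export file. The field names are assumptions, not Facebook's actual export format, and the real deletion call is deliberately left out.

```python
import json
from datetime import datetime, timedelta, timezone

# Hypothetical export: a list of posts with Unix timestamps, loosely
# modelled on a social-media data download. Field names are assumptions.
export = json.loads("""[
    {"id": "1", "timestamp": 1514764800, "text": "old post"},
    {"id": "2", "timestamp": 9999999999, "text": "recent post"}
]""")

def stale_posts(posts, now, max_age=timedelta(days=7)):
    """Return posts older than max_age: candidates for deletion."""
    cutoff = now - max_age
    return [p for p in posts
            if datetime.fromtimestamp(p["timestamp"], tz=timezone.utc) < cutoff]

now = datetime(2018, 3, 20, tzinfo=timezone.utc)
for post in stale_posts(export, now):
    # A real script would issue a deletion request here; this only lists them.
    print(post["id"], post["text"])
```

Keeping the age cutoff as a parameter matters: the point of the exercise is that anything beyond the recent window has stopped serving you and only serves the dossier.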

There is a method to my madness. I like Facebook Messenger and I like that Facebook is now a glorified version of OAuth. It’s a useful tool when it is stripped of its power. However, when it is larded with my personal details it is a weapon and a liability.

Think about it: any posts older than about a week are fodder for bots and bad actors. Posts from 2016? 2017? Why keep them? No one will read them, no one cares about them. Those “You and Joe have known each other for five years” auto-posts are fun but does anyone care? Ultimately you’ve created the largest dossier on yourself and you’ve done it freely, even gleefully. This dossier reflects your likes, your dislikes, your feelings, and political leanings. It includes clear pictures of your face from all angles, images of your pets and family, and details your travels. You are giving the world unfettered access to your life. It’s wonderful to imagine that this data will be used by a potential suitor who will fall in love with your street style. It’s wonderful to imagine you will scroll through Facebook at 80 and marvel at how you looked at the turn of the century. It’s wonderful to imagine that Facebook is a place to share ideas, dreams, and hopes, a human-to-human connection engine that gives more than it takes.

None of that will happen.

Facebook is a data collection service for those who want to sell you products. It is the definitive channel to target you based on age, sex, geographic location, political leanings, interests, and marital status. It’s an advertiser’s dream and it is wildly expensive in terms of privacy lost and cash spent to steal that privacy. It is the perfect tool for marketers, a user-generated paradise that is now run by devils.

Will you delete Facebook? Probably not. Will I? I’m working on it. I’ve already been deleting old tweets after realizing that border police and potential employers may use what I write publicly against me. I’m clearing out old social media accounts and, as I mentioned before, deleting old Facebook posts, thus ensuring that I will no longer be a target for companies like Cambridge Analytica. But we love our social media, don’t we? The power it affords. The feeling of connection. In the absence of human interaction we cling to whatever dark simulacrum is available. In the absence of the Town Square we talk to ourselves. In the absence of love and understanding we join the slow riot of online indifference.

When Travis Kalanick led his ride-sharing company down the dark path to paranoia, bro culture, and classist rantings, we reacted by deleting the app. We didn’t want to do business with that particular brand of company. Yet we sit idly by while Facebook sells us out and its management pummels and destroys all competition.

I wish it didn’t have to be this way. There is plenty of good in these platforms but the dangers far outweigh the benefits. Try to recall the last time you were thankful for social media. I can. It happened twice. First, it happened when I posted on my “wall” a eulogy for my father who died in January. The outpouring of support was heartening in a dark time. It was wonderful to see friends and acquaintances tell me their own stories, thereby taking the sting out of my own. But months later that good feeling is gone, replaced by ads for fancy shoes and political rants. Out of the Facebook swamp sometimes surfaces a pearl. But it sinks just as quickly.

One more sad example: I found out, accidentally, that my friend’s wife died. It appeared on my feed as if placed there by some divine hand and I was thankful it surfaced. It beat out videos of Mister Rogers saying inspiring things and goofy pictures of Trump. It beat out ads and rants and questions about the best sushi restaurant in Scranton. The stark announcement left me crying and breathless. There it was in black and blue, splashed across her page: she was gone. There was the smiling photo of her two little children and there was the outpouring of grief under these once innocuous photos. Gone, it said. She was gone. I found out from her wall where her memorial service would be held and I finally reached back out to my old friend to try to comfort him in his grief. Facebook, in those two instances, worked.

But Facebook isn’t the only thing that can give us that feeling of connectedness. We’ve had it for centuries.

Facebook simply replaced the tools we once used to tell the world of our joys and sorrows and it replaced them with cheap knock-offs that make us less connected, not more. Decades ago, on one coal-fogged winter morning in Krakow, Poland, where I was living, I passed Kościół św. Wojciecha with its collection of nekrologi – necrologies – posted on a board in front of the church. There you saw the names of the dead – and sometimes the names of the newly born – and it was there you discovered what was happening in your little corner of the world. The church wasn’t far from the central square – the Rynek – and I walked there thinking about the endless parade of humanity that had walked across those cobbles, stopping for a moment in their hustle at the churchyard to see who had died. I stood in the crisp air, flanked by centuries-old brickwork, and imagined who once populated this place. This was the place you met your friends and your future partners. It was there you celebrated your successes and mourned your failures. It was there, among other humans, you told the world the story of your life, but told it slant. You witnessed kindnesses and cruelties, you built a world entire based on the happenings in a few square miles.


No more. Or, at least, those places are no longer available to most of us.

We’ve moved past the superstitions and mythologies of the past. Tools like Facebook were designed to connect us to the world, giving us an almost angelic view of daily happenstance. We replaced the churchyard with the “timeline.” But our efforts failed. We are still as closed, still full of superstition, as we were a hundred years ago. We traded a market square for the Internet but all of the closed-mindedness and cynicism came with it. We still disparage the outsider, we still rant against invisible enemies, and we still keep our friends close and fear what lies beyond our door. Only now we have the whole world on which to reflect our terror.

It doesn’t have to be this way. Maybe some day we’ll get the tools we need to interact with the world. Maybe they’re already here and we just don’t want to use them.

Until we find them, however, it’s probably better for us to delete the ones we use today.


from: https://techcrunch.com/2018/03/19/deletefacebook/

Reference: https://en.wikipedia.org/wiki/Donald_Trump_presidential_campaign,_2016





Chrome Plug-In




Batch delete posts in Facebook (TM) timeline. Other batch processing: privacy / hide / unhide / unlike items. FREE!
>>> Batch delete Social Book posts/items
>>>                 hide/unhide, unlike
>>> Support All Languages on Social Book

The extension is free to use for everyone.
I don't know why there are so many cloned
reviews on top. I feel really awful about that.
Please DO NOT duplicate reviews!

* Facebook (TM) is a registered trademark of Facebook. The author of this extension is by no means associated or affiliated with Facebook. This extension uses absolutely NO Facebook APIs, and it is therefore not bound by any Facebook API licenses. All occurrences of the word "Facebook" are for descriptive purposes only, and hence legally allowed.

"Social Book Post Manager" helps you delete your posts through the activity log, which includes posts by you and by other persons/apps. You may specify "Year", "Month", "Text Contains", and "Text Not Contains" filters for posts to delete. Combined with the activity log filters provided by Facebook (TM), you have full control over which posts to delete and which to keep. Recent new features:

1) Text filters support AND/OR conditions.
2) Prescan the activity log. You can then select exactly the individual entries you want to delete/hide/unhide/unlike or change privacy settings for.
3) Hide/unhide timeline items.
4) Unlike items.
5) Change privacy settings.

ALL the features are FREE for you to use, and totally UNLIMITED. If you are satisfied, please leave me some feedback. Also, please feel free to send me suggestions, and bug reports if any.

>>> The process is slow. It simulates your mouse clicks on the delete button one by one. This is an intentional limitation by Facebook (TM); there is no way to bypass it. <<<

>>> Other Chrome extensions may conflict with this extension. If it's not working, please follow the instructions to remove/disable other Chrome extensions temporarily. Then restart Chrome and try again. <<<


Notes: The scanning/deletion process takes a LONG time to finish, depending on the number of posts involved. Because Facebook (TM) does not want users to easily remove posts, it does not provide any function/API to delete multiple posts at a time. This extension can only filter/delete posts one by one. There is no known method to accelerate this process. Please be patient, sit back, and let the extension do its job. You may have a coffee, or better, just let it run overnight.
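As a rough illustration of the filter logic the listing describes ("Year", "Month", "Text Contains", "Text Not Contains", with AND/OR conditions on the text filters), here is a minimal sketch in Python. The function name, the post structure, and all parameters are hypothetical stand-ins, not the extension's actual code:

```python
def matches_filters(post, year=None, month=None,
                    contains=None, not_contains=None, mode="AND"):
    """Return True if a post (a dict with 'year', 'month', 'text')
    matches the given filters. Unset filters are ignored."""
    if year is not None and post["year"] != year:
        return False
    if month is not None and post["month"] != month:
        return False
    text = post["text"].lower()
    if contains:
        # AND requires every term to appear; OR requires at least one.
        hit = all if mode == "AND" else any
        if not hit(term.lower() in text for term in contains):
            return False
    if not_contains and any(term.lower() in text for term in not_contains):
        return False
    return True

# Example: select only posts from 2016 for deletion.
posts = [
    {"year": 2016, "month": 3, "text": "Vacation photos from Krakow"},
    {"year": 2017, "month": 3, "text": "Political rant about everything"},
]
to_delete = [p for p in posts if matches_filters(p, year=2016)]
```

The actual extension then walks the Activity Log and simulates a delete click for each matching entry one by one, which is why the process is slow.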


Instructions (with "Prescan" option checked):

1. Log in to Facebook (TM) and go to the Activity Log. Use activity filters to select a subset of posts to delete.
2. Click the extension's button to open the interface.
3. If needed, choose "Year", "Month", "Text Contains" and "Text Not Contains" for the posts that you want to delete.
4. Click the "Delete Post" button. The extension will scan through your Activity Log and mark all posts matching the conditions.
5. The prescan process may take a long time, depending on the number of posts to be deleted.
6. After the prescan finishes, a Confirm button appears at the top of the Facebook (TM) page. You may verify and uncheck certain posts as desired, then continue to actually delete the posts.
7. When done, the extension reports the total number of posts deleted.


Instructions (with "Prescan" option unchecked):

1. Log in to Facebook (TM) and go to the Activity Log. Use activity filters to select a subset of posts to delete.
2. Click the extension's button to open the interface.
3. If needed, choose "Year", "Month", "Text Contains" and "Text Not Contains" for the posts that you want to delete.
4. Click the "Delete Post" button. The extension will scan through your Activity Log and delete all posts matching the conditions.
5. The deletion process may take a long time, depending on the number of posts to be deleted.
6. A deleted-post counter is displayed in real time. You may stop the deletion process at any time by clicking the "Stop Now" button.
7. When done, the extension reports the total number of posts deleted.



Adrian Lamo, The Hacker Who Turned in Chelsea Manning, Dead at 37

Adrian Lamo (February 20, 1981 – March 14, 2018)

Adrian Lamo, “the homeless hacker” – a.k.a. ‘Protagonist’, ‘Bitter Geek’, ‘AmINotMerciful’, ‘Unperceived’, ‘Mythos’, ‘Arcane’, ‘truefaith’, ‘FugitiveGame’ – who found shelter most evenings in abandoned buildings or on friends’ couches, was a study in contradictions. Diagnosed with Asperger Syndrome, an early contract worker on PlanetOut.com, and working from an old Toshiba laptop that was missing seven keys, he used the same method in every case: find security holes; offer to fix them; refuse payment in exchange for help; wait until the hole is patched; alert the media.

By Brian Krebs

This entry was posted on Sunday, March 18th, 2018 at 11:53 pm and is filed under A Little Sunshine

Adrian Lamo, the hacker probably best known for breaking into The New York Times’s network and for reporting Chelsea Manning’s theft of classified documents to the FBI, was found dead in a Kansas apartment on Wednesday. Lamo was widely reviled and criticized for turning in Manning, but that chapter of his life eclipsed the profile of a complex individual who taught me quite a bit about security over the years.

I first met Lamo in 2001 when I was a correspondent for Newsbytes.com, a now-defunct tech publication that was owned by The Washington Post at the time. A mutual friend introduced us over AOL Instant Messenger, explaining that Lamo had worked out a simple method allowing him to waltz into the networks of some of the world’s largest media companies using nothing more than a Web browser.

The panoply of alternate nicknames he used on instant messenger in those days shed light on a personality not easily grasped: Protagonist, Bitter Geek, AmINotMerciful, Unperceived, Mythos, Arcane, truefaith, FugitiveGame.

In this, as in so many other ways, Lamo was a study in contradictions: Unlike most other hackers who break into online networks without permission, he didn’t try to hide behind the anonymity of screen names or Internet relay chat networks.

By the time I met him, Adrian had already earned the nickname “the homeless hacker” because he had no fixed address, and found shelter most evenings in abandoned buildings or on friends’ couches. He launched the bulk of his missions from Internet cafes or through the nearest available dial-up connections, using an old Toshiba laptop that was missing seven keys. His method was the same in every case: find security holes; offer to fix them; refuse payment in exchange for help; wait until the hole is patched; alert the media.

Lamo had previously hacked into the likes of AOL Time Warner, Comcast, MCI WorldCom, Microsoft, SBC Communications and Yahoo after discovering that these companies had enabled remote access to their internal networks via Web proxies, a kind of security by obscurity that allowed anyone who knew the proxy’s Internet address and port number to browse internal shares and other network resources of the affected companies.

By 2002, Lamo had taken to calling me on the phone frequently to relate his various exploits, often spoofing his phone number to make it look like the call had come from someplace ominous or important, such as The White House or the FBI. At the time, I wasn’t actively taking any measures to encrypt my online communications, or to suggest that my various sources do likewise. After a few weeks of almost daily phone conversations with Lamo, however, it became abundantly clear that this had been a major oversight.

In February 2002, Lamo told me that he’d found an open proxy on the network of The New York Times that allowed him to browse the newsroom’s corporate intranet. A few days after that conversation, Lamo turned up at Washingtonpost.com’s newsroom (then in Arlington, Va.). Just around the corner was a Kinkos, and Adrian insisted that I follow him to the location so he could get online and show me his discovery firsthand.

While inside the Times’ intranet, he downloaded a copy of the Times’ source list, which included phone numbers and contact information for such household names as Yogi Berra, Warren Beatty, and Robert Redford, as well as high-profile political figures – including Palestinian leader Yassir Arafat and Secretary of State Colin Powell. Lamo also added his own contact information to the file. My exclusive story in Newsbytes about the Times hack was soon picked up by other news outlets.

In August 2003, federal prosecutors issued an arrest warrant for Lamo in connection with the New York Times hack, among other intrusions. The next month, The Washington Post’s attorneys received a letter from the FBI urging them not to destroy any correspondence I might have had with Lamo, and warning that my notes may be subpoenaed.

In response, the Post opted to take my desktop computer at work and place it in storage. We also received a letter from the FBI requesting an interview (that request was summarily denied). In October 2003, the Associated Press ran a story saying the FBI didn’t follow proper procedures when it notified reporters that their notes concerning Lamo might be subpoenaed (the DOJ’s policy was to seek materials from reporters only after all other investigative steps had been exhausted, and then only as a last resort).

In 2004, Lamo pleaded guilty to one felony count of computer crimes against the Times, as well as LexisNexis and Microsoft. He was sentenced to six months’ detention and two years’ probation, and ordered to pay $65,000 in restitution.

Several months later while attending a formal National Press Foundation dinner at the Washington Hilton, my bulky Palm Treo buzzed in my suit coat pocket, signaling a new incoming email message. The missive was blank save for an unusually large attachment. Normally, I would have ignored such messages as spam, but this one came from a vaguely familiar address: adrian.lamo@us.army.mil. Years before, Lamo had told me he’d devised a method for minting his own .mil email addresses.

The attachment turned out to be the Times’ newsroom source list. The idea of possessing such information was at once overwhelming and terrifying, and for the rest of the evening I felt certain that someone was going to find me out (it didn’t help that I was seated adjacent to a table full of NYT reporters and editors). It was difficult not to stare at the source list and wonder at the possibilities. But ultimately, I decided the right thing to do was to simply delete the email and destroy the file.


Lamo was born in 1981 outside of Boston, Mass. into an educated, bilingual family. Lamo’s parents say from an early age he exhibited an affinity for computers and complex problem solving. In grade school, Lamo cut his teeth on a Commodore64, but his parents soon bought him a more powerful IBM PC when they grasped the extent of his talents.

“Ever since he was very young he has shown a tendency to be a lateral thinker, and any problem you put in front of him with a computer he could solve almost immediately,” Lamo’s mother Mary said in an interview in 2003. “He has a gifted analytical mind and a natural curiosity.”

By the time he got to high school, Lamo had graduated to a laptop computer. During a computer class his junior year, Lamo upstaged his teacher by solving a computer problem the instructor insisted was insurmountable. After an altercation with the teacher, he was expelled. Not long after that incident, Lamo earned his high school equivalency degree and left home for a life on his own.

For many years after that he lived a vagabond’s existence, traveling almost exclusively on foot or by Greyhound bus, favoring the affordable bus line for being the “only remaining form of mass transit that offers some kind of anonymity.” When he wasn’t staying with friends, he passed the night in abandoned buildings or under the stars.

In 1995, Lamo landed contract work at a promising technology upstart called America Online, working on “PlanetOut.com,” an online forum that catered to the gay and lesbian community. At the time, advertisers paid AOL based on the amount of time visitors spent on the site, and Lamo’s job was to keep people glued to the page, chatting them up for hours at a time.

Ira Wing, a security expert at one of the nation’s largest Internet service providers, met Lamo that year at PlanetOut and the two became fast friends. It wasn’t long before he joined in one of Lamo’s favorite distractions, one that would turn out to be an eerie offshoot of the young hacker’s online proclivities: exploring the labyrinth of California’s underground sewage networks and abandoned mines.

Since then, Lamo kept in touch intermittently, popping in and out of Wing’s life at odd intervals. But Wing proved a trustworthy and loyal friend, and Lamo soon granted him power of attorney over his affairs should he run into legal trouble.

In 2002, Wing registered the domain “freeadrian.com,” as a joke. He’d later remark on how prescient a decision that had been.

“Adrian is like a fast moving object that has a heavy affect on anyone’s life he encounters,” Wing told this reporter in 2003. “And then he moves on.”


In 2010, Lamo was contacted via instant message by Chelsea Manning, a transgender Army private who was then known as Bradley Manning. The Army private confided that she’d leaked a classified video of a helicopter attack in Baghdad that killed 12 people (including two Reuters employees) to Wikileaks. Manning also admitted to handing Wikileaks some 260,000 classified diplomatic cables.

Lamo reported the theft to the FBI. In explaining his decision, Lamo told news publications that he was worried the classified data leak could endanger lives.

“He was just grabbing information from where he could get it and trying to leak it,” Mr. Lamo told The Times in 2010.

Manning was later convicted of leaking more than 700,000 government records, and received a 35-year prison sentence. In January 2017, President Barack Obama commuted Manning’s sentence after she’d served seven years of it. In January 2018, Manning filed to run for a Senate seat in Maryland.


The same month he reported Manning to the feds, Lamo told Wired.com that he’d been diagnosed with Asperger Syndrome after being briefly hospitalized in a psychiatric ward. Lamo told Wired that he suspected someone had stolen his backpack, and that paramedics were called when the police responding to reports of the alleged theft observed him acting erratically and perhaps slurring his speech.

Wired later updated the story to note that Lamo’s father had reported him to the Sacramento Sheriff’s office, saying he was worried that his son was over-medicating himself with prescription drugs.

In 2011, Lamo told news outlet Al Jazeera that he was in hiding because he was getting death threats for betraying Manning’s confidence and turning her in to the authorities. In 2013, he told The Guardian that he’d struggled with substance abuse “for a while.”

It’s not yet certain what led to Lamo’s demise. He was found dead in a Wichita apartment on March 14. According to The Wichita Eagle, Lamo had lived in the area for more than a year. The paper quoted local resident Lorraine Murphy, who described herself as a colleague and friend of Lamo’s. When Murphy sent him a message in December 2016 asking him what he was up to, he reportedly replied “homeless in Wichita.”

“Adrian was always homeless or on the verge of it,” Murphy is quoted as saying. “He bounced around a great deal, for no particular reason. He was a believer in the Geographic Cure. Whatever goes wrong in your life, moving will make it better. And he knew people all over the country.”

The Eagle reports that Wichita police found no signs of foul play or anything suspicious about Lamo’s death. A toxicology test was ordered but the results won’t be available for several weeks.

from: https://krebsonsecurity.com/2018/03/adrian-lamo-homeless-hacker-who-turned-in-chelsea-manning-dead-at-37/

Reference: https://en.wikipedia.org/wiki/Adrian_Lamo


Lamo – Mitnick – Poulsen

Group photo of Adrian Lamo, Kevin Mitnick, and Kevin Lee Poulsen circa 2001

Kevin David Mitnick (born August 6, 1963) is an American computer security consultant, author and hacker, best known for his high-profile 1995 arrest and later five years in prison for various computer and communications-related crimes.

Kevin Lee Poulsen (born November 30, 1965) is an American former black-hat hacker and a current editor at Wired.




Blockchain Based Solution to Firearms Registration

Though this iHLS piece promotes one specific company (Blocksource) and its product – i.e., shameless promotion of an accelerator and one of its participants – and what it describes is not actually proven Blockchain technology, the idea behind it could very well be a good use of Blockchain technology.




The recent wave of deadly terrorist attacks in Europe, particularly in Paris, has raised questions regarding illegal firearms, such as deactivated Kalashnikov rifles that anyone can purchase even in developed countries like France. Criminals and terrorists use deactivated firearms – they purchase the weapons, reactivate them and sell them on the black market. At least 25% of the 80 million weapons in Europe are, in fact, illegal, with Brussels being one of the leading hubs for illegal weapons.

At the same time, the European security agencies surprisingly do not share enough information with each other, and there is an acute lack of data security. Under a new EU regulation (Directive 2017/853), member states are required to establish a digital firearms registry. An effective registry must include all (or most) member states to encompass cross-border transactions; it cannot be achieved on a domestic level alone. At the same time, no single member state is fully trusted by the other states to run the registry.

The cybersecurity startup Blocksource offers an innovative Blockchain-based solution to this acute problem. Blocksource is the first startup that arrived in Israel from abroad to take part in the iHLS Security Accelerator.

Blocksource is dedicated to the development of decentralized data sharing solutions, which allow entities to share information effectively, privately and securely. They provide organizations with a layer of defense against malicious cyber attacks that would result in undetectable data manipulation. Sharing of data can be secured in both an intra- and an inter-organizational setting.

Their first project solves the issue of firearms ownership tracking within the EU through the establishment of a sophisticated registry that makes it possible to trace weapons transactions back to their sources. The solution is a Blockchain-based, decentralized database system jointly operated by all network participants.

The decentralized firearms registry will eventually be jointly operated by all parties involved in the life cycle of firearms – police stations, agencies, manufacturers, retailers, and deactivators. This will solve the problem of dispersed databases that hardly communicate, so that local law enforcement agencies will be able to trace the supply chain of seized firearms even if they were purchased in another state.

Furthermore, the technology can provide member states with efficient monitoring of firearms licenses by requiring that governmental entities authorize any transfer of firearms ownership.
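As a rough sketch of the general idea, the registry described above can be modeled as a hash-chained ledger of authorized transfer records: each record's hash covers the previous record's hash, so any tampering with history is detectable, and the full chain of custody for a given serial number can be traced. All names and fields here are hypothetical illustrations, not Blocksource's actual design, which additionally distributes the ledger across participants and adds privacy protections on top:

```python
import hashlib
import json

def record_hash(record, prev_hash):
    # Hash the record together with the previous record's hash,
    # chaining each entry to all history before it.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class Registry:
    def __init__(self):
        self.chain = []  # list of (record, hash) pairs

    def add_transfer(self, serial, seller, buyer, authorized_by):
        # Per the scheme above, a governmental entity must authorize
        # any transfer of ownership before it is appended.
        record = {"serial": serial, "seller": seller,
                  "buyer": buyer, "authorized_by": authorized_by}
        prev = self.chain[-1][1] if self.chain else "genesis"
        self.chain.append((record, record_hash(record, prev)))

    def verify(self):
        # Recompute every link; any tampered record breaks the chain.
        prev = "genesis"
        for record, h in self.chain:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True

    def trace(self, serial):
        # Return the full ownership history of one firearm.
        return [r for r, _ in self.chain if r["serial"] == serial]
```

A seized weapon's serial number can then be traced back through every recorded transfer, even across borders, as long as all participating states append to the same chain.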

The technology ensures that changes of the shared data are immediately visible to any predefined permissioned participant in the network. The solution leverages Blockchain technology and represents the next evolution of Blockchain technology by tracking digitized assets that exist in the real world instead of virtual cryptocurrencies.

The emphasis on the privacy of information is one of Blocksource technology’s major advantages. While typical Blockchain solutions cannot guarantee confidentiality and privacy, the startup’s products deploy advanced cryptographic methods, such as zero-knowledge proofs, off-chain state channels and cryptographic address deriving schemes.

This groundbreaking technology can also be applied in many fields other than security, such as securely tracing assets in a logistic supply chain, solving the problem of information gaps between the various “links” in the chain – forwarders, end users etc., enabling the transaction of documents and information securely and privately. The sharing of data in the VC field is another potential sphere of activity.

Blocksource’s three entrepreneurs represent a combination of several fields of expertise. CEO Neal Swaelens, who holds a BSc in Banking & Finance from the Frankfurt School of Finance & Management and has a background in Blockchain and M&A, is an experienced startup entrepreneur from Belgium and the winner of a 2015 Company of the Year award for his startup Tapalo.

CTO Sebastian Stammler is a recognized academic expert on Blockchain and privacy, a member of a German round table on Blockchain and a Ph.D. researcher with a background in Mathematics. He has further work experience in quantitative finance and software development.

COO Roee Sarel is an expert on the economics of crime, a Ph.D. researcher in Law & Economics, a lecturer at the Frankfurt School of Finance & Management and an Israeli lawyer with a background in law and business (LL.B & MBA).

“We hope that governments and law enforcement authorities will use our unique technology to share secure information,” says Swaelens. At a later stage, he envisions that weapons shops would follow suit. The company expects to establish an all-European registry system that will contribute to a shift in security and law enforcement, setting up a new industry standard.


from: https://i-hls.com/archives/81992

reference: https://www.blocksource.io/




A Roadside Bomb (IED) Simulator: US Army Replacing 1980s War Games Simulators With Gaming Tech – In Early 2020

A roadside bomb (IED) simulator.


The first phase of the Synthetic Training Environment initiative replaces existing simulators for vehicles. The second phase aims to create — in just two years — something the US Army never had before: an “immersive” virtual training environment for troops on foot.

on March 16, 2018


Lockheed Wins Contract To Maintain 100+ Legacy Training Systems

After years of toying with the technology, the US Army is now racing to replace its clunky 1980s and ’90s-vintage training simulators with virtual reality, massive multiplayer networks and other innovations straight from the commercial gaming industry.

On March 5th, seven vendors started demonstrating new options for aircraft and ground vehicle simulators to Army experts, the two-star head of the effort says in an interview. A field test with combat troops is tentatively scheduled for July or August, Maj. Gen. Maria Gervais tells me. By fiscal year 2019 or “early” 2020, she says, the Army will have done enough testing to decide whether or not to buy the new simulators in bulk.


Maj. Gen. Maria Gervais


That’s a meteoric pace for any government IT procurement, let alone one by the Army, the largest and arguably most bureaucratic of the services. But that’s not all. While the first phase of the Synthetic Training Environment initiative replaces existing simulators for vehicles, the second phase aims to create — in just two years — something the Army’s never had before: an “immersive” virtual training environment for troops on foot.

Unlike vehicle crews, infantry can’t train by sitting in a box and interacting with screens. Close combat is intensely physical. Foot troops need to crawl, run, climb, and dive for cover, interacting with the real world and each other. So instead of virtual reality, the infantry squad trainer will use augmented reality: some kind of Google Glass-like heads-up display that lets the soldiers still see the physical environment all around them, but which can also superimpose digital obstacles, civilians, and even adversaries on the real terrain. (One defense official compared it to a militarized Pokémon Go).

Combining virtual reality and real reality this way lets troops train in a variety of scenarios no purely physical site could match. “If you have a MOUT (urban warfare) site at Fort Benning, you can go into it one time and you could be in Afghanistan,” Gervais says. “The next time you could be in Iraq.”


Army marksmanship simulator.
Such systems can train specific tasks –
but not the highly physical skill set required for infantry combat.


High Priority

The infantry squad trainer in particular exists at the intersection of two powerful leaders’ agendas. Last October, Army Chief of Staff Gen. Mark Milley created eight Cross Functional Teams to jumpstart long-stalled modernization, each bringing together expertise historically scattered across the service. In parallel, this February, Defense Secretary Jim Mattis launched a Close Combat Lethality Task Force to improve the entire US infantry, with a particular focus on training simulations.


Defense Secretary Jim Mattis


So while Maj. Gen. Gervais runs the Army’s Synthetic Training Environment CFT, she’s working closely with the service’s Soldier Lethality CFT, the SecDef’s Close Combat taskforce, the Marine Corps, and Special Operations Command. She’s also responsible for training simulators for every tank, aircraft and other system the other seven CFTs come up with. She keeps in close touch with them all through weekly video teleconferences.

Day to day, Gervais has US Army special forces veterans and Marines on her team, along with conventional Army infantrymen, acquisition experts, software engineers, a representative of military intelligence, and (this being America) lawyers. That’s expertise that historically would have been scattered across multiple organizations and locations, cooperating in written exchanges that drag on for months or years.

With the Cross Functional Team approach, by contrast, “I get all the folks in the room,” Gervais says. “You have the discussions right then and there and are solving the problem.”


US Army helicopter simulator


The Problem

Maj. Gen. Gervais and her team have a tough problem to solve, because the US Army’s current mix of simulators is a mess. During the “training revolution” of the 1980s, the Army enthusiastically embraced what was then state of the art. Many of those simulators are still around, more than 30 years and 20 Moore’s Law cycles later.

The scale and complexity is staggering. When Lockheed Martin announced today it had won a seven-year, $3.53 billion contract to sustain the Army’s existing training equipment, Lockheed exec Amy Gowder estimated they’d be responsible for 100 different major training systems, each with its unique and often incompatible software and spare parts. There are over 300,000 individual inventory items, from full-up simulators to protective gear for firing ranges, and 21 million piece parts.

Why so many? In most cases, each Army program — each vehicle, aircraft, and weapon — developed its own simulator to train troops on its own product, without much thought to how they might interact. But in actual battle, the tank crew or pilot that goes it alone is probably going to die. Realistic training requires working with other vehicles, and not just other vehicles of the same type, but with many types.

The current simulators aren’t set up for that. Getting them to work together requires cumbersome kludges, like deliberately slowing down the aviation simulator so it can work with the tank and Bradley simulators. Some aspects of combat just can’t be portrayed at all. The command and control software the Army uses to direct real battles, for example, generally doesn’t work in the virtual world of its simulators.

Then there’s the nightmare of terrain. When, increasingly anxious about Russian airpower, the Army tried to train its helicopter crews and anti-aircraft gunners together, it turned out they weren’t in the same virtual world. On their screens, the pilots were flying nap of the earth and taking cover behind hills, the standard tactic. On anti-aircraft gunners’ screens, the hills didn’t exist and the helicopters were sitting ducks.


US Army Reserve troops train to avoid and clear roadside bombs.


All told, “we have 57 terrain formats that we’re trying to line up all the time,” Gervais says. So the new simulators now being demonstrated will use a single common standard, called One World Terrain: “With One World Terrain we believe we can quickly go down to 28 formats.” How long to get it down to one common terrain format for the entire Army? “Give me about a year and a half,” she offers.

It’s not just the virtual terrain that needs improvement. There have to be people in this brave new world as well. When troops train on most current simulators, “there’s no dogs, there’s no horse carts, there’s no people, so it’s a very sterile environment,” Gervais says. “That’s not what it looks like (in, say, Afghanistan). It’s very congested, you’ve got donkeys running across the road.”

As the US Army focuses increasingly on urban warfare, the simulation problem becomes even more complex. “A megacity — we can create the buildings, but we’ve got to go in the buildings, we’ve got to go under the buildings, and we want that city to be alive,” she says, full of civilians and vehicles going about their day. “We’re working on the patterns of life and we’re starting to make a lot of progress.”

How much will all this cost? “We don’t have, since we just got started, a really solid estimate, but… I don’t think (it’s) approaching the billion (dollar level),” Gervais says. “As we’re working with the commercial virtual and gaming industry, those costs are going down.”

“I’ll have a much better cost estimate in the next year or two,” she says, “after I get done with these technology demonstrations and experiments.”


from: https://breakingdefense.com/2018/03/war-games-army-replacing-1980s-simulators-with-gaming-tech/





US intelligence official: US still leads in quantum computing … for now

Recent comments from the intelligence community indicate quantum computing could be one of the most important technology investments being made today, but the field has many worried about what might happen if others achieve it first.

Quantum technologies could have profound national security implications: quantum-secured communications could make an adversary’s messages unreadable, while a working quantum computer could let an adversary easily decrypt critical protected intel.

The United States, for now, remains the leader, according to one U.S. intelligence official.

“This is an area in which the U.S. has a lead in part because of a large investment by the federal government in quantum computing research,” Jason Matheny, director of the Intelligence Advanced Research Projects Activity, said during a Defense Writers Group breakfast March 14. “China is investing but not at the levels that the United States has.”

Matheny said he thinks the community is still 20-plus years away from a quantum computer that’s relevant to encryption, as quantum physics is a very challenging field.

Lt. Gen. Robert Ashley, the director of the Defense Intelligence Agency, used prepared congressional testimony earlier this year to describe a challenging landscape if adversaries get to quantum computing before the United States.


“Adversaries are giving priority to researching quantum-enabled communications and quantum computing, which could supply the means to field highly secure communication systems and eventually to break certain encryption algorithms,” his testimony read. “The challenge for predicting the next emerging and disruptive technology for the future is anticipating the follow-on effects of seemingly innocuous technologies that are evolving today.”


However, Matheny said there still needs to be more research in quantum-resistant encryption.

“One of the reasons that we invest is so that we understand where the state of the art is, understand a bit about what the timelines are so that we can deploy quantum-resistant encryption when it’s critical,” he said. “But given that we want to protect classified data for 20-plus years typically by policy, it does mean that we should probably be pursuing quantum-resisting encryption with a lot of energy.”


But what about artificial intelligence?

In a related endeavor, many have sounded the alarm about the growing global race in artificial intelligence and machine learning.

Like quantum computing, investments by peer competitors in AI have many government leaders worried that the U.S. advantage could erode.

Matheny provided a more measured tone.

“I think the United States does have a healthy lead,” Matheny said, adding he’s not a “catastrophist” when looking at the race against China, which he said is the strongest global competitor.

The Chinese have been very thoughtful about how to pursue research on machine learning and AI, he acknowledged, basically translating the U.S. AI development plan. “They’ve also introduced an implementation plan for that AI strategy that includes quantitative milestones and speech recognition and imagery analysis and video analysis that are, I think, realistic but also ambitious,” he said.

However, most of the fundamental breakthroughs in machine learning in the last few years have been due to U.S.-funded research, Matheny added, noting the U.S. has the strongest universities in machine-learning research, the strongest start-up culture, the most risk-tolerant companies willing to fund high-risk research that might not pay out for five to 10 years, and the most risk-tolerant federal funders.


from: https://www.c4isrnet.com/intel-geoint/2018/03/15/intelligence-official-us-still-leads-in-quantum-computing-for-now/






Sweden’s plan to deter a Russian digital attack

WASHINGTON — As Sweden seeks to revitalize its “total defense” concept, it will rely heavily on its private technology industry to develop new protections from cyberattacks.

The blueprint, which would see the entirety of Sweden activated to repel an invasion, was laid out by Defence Commission head Bjorn von Sydow and commission secretariat chief Tommy Akesson during a February interview with Defense News.

While Sweden had plans throughout the Cold War to militarize the nation in case of an attack, government officials let those plans expire as the country’s relationship with Russia changed. That means leaders have a system to build on as they develop a new strategy.

But there is a vital area the old plans never had to account for: cyberwarfare, which is expected to become an early focus for the commission.

“The cyber challenges were not known 25 years ago. They were even less taken care of by the system. So we need time,” von Sydow said.

He acknowledged it’s logical to assume digital strikes against infrastructure and the power grid would be the first move in any aggression by a great power. That could be particularly crippling to the civilian population if any attack came in winter. Imagine going days without heat or electricity with temperatures well below freezing.

Sweden doesn’t have to look far to see what damage could come from a digital-first strike.


  • Estonia, located a short distance away, was infamously hit with a major cyberattack that crippled the government in 2007.
  • In 2015, Ukraine’s power grid was shut down via cyberattack; since then, other utilities have been taken offline.


In both cases, analysts believe Russia was behind the attacks, and hence they can be seen as a preview of the kinds of activities that could come at the start of a Russian military action.

“Sweden is like the U.S., a tremendously digitalized country,” von Sydow said. “In some areas, you would probably not be able to open a door without the digital performance. If electricity is out, or at least out and in, it would probably [have] tremendous effects.”

Magnus Nordenman of the Atlantic Council agrees Sweden is vulnerable to this kind of assault, as it is one of the most wired-in countries in the world. “On a good day, it’s very efficient,” Nordenman said. “But also, potentially, it’s exposed in a crisis.”

As a result, Akesson said, the commission is working with the government to craft new regulations for cybersecurity in private sector companies, as well as driving toward greater investment in military cyber capabilities.

“We see it more or less as a military instrument, and we will see how much we invest. Generally speaking, we are quite good at cyber things in Sweden. We have a lot of companies and engineers and people thinking about those issues,” Akesson noted.

Sweden is home to a vibrant technology scene. Music-streaming giant Spotify is based out of Stockholm, as is the business software company Wrapp. Outside of the city, Facebook selected the coastal town of Luleå as home to its first data center outside the United States.

In Western countries, it can sometimes be a challenge to align the private sector with the government. Generally, the tech community has resisted top-down orders from the government, and famously avoids working on some projects because of stringent regulations and intellectual property requirements.

But Erik Brattberg with the Carnegie Endowment for International Peace suspects the domestic cyber industry won’t raise objections to working on new security standards or assisting the government with emergency preparations.


“In Sweden, there is a high trust of the government,” he said. “I would think companies are happy to try and play their role, as well. They recognize it is of their interest to be helpful, ultimately.”


To get a sense how cyber total defense might work, Sweden simply has to look next door to Finland, which never drew down its total defense plan and has worked to integrate cyber capabilities into its strategy.


“Not too long ago I read the research that Finnish networks are the best-protected in the world, and that’s [not only] because of what defense is doing, but because we have a very good level of that industry in Finland,” Finnish Defence Policy Director-General Janne Kuusela told Defense News during a recent visit to Washington.


“The interaction is already there. And we benefit a lot from having people who worked in this domain, in their civilian lives, so they are reservists and bring a lot of additional knowledge and interaction for the defense goals for us. It’s a good way of dealing with this.”

Whether Sweden can find the level of cyber resiliency it needs remains to be seen, but the Defence Commission intends to ensure resources are available. The current plan suggests that between 2021 and 2025, Sweden will need to invest 4.2 billion krona (U.S. $510.5 million) per year on its total defense proposals.
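The conversion quoted above implies an exchange rate that the article never states. A minimal sketch, deriving that implied rate from the article’s own two figures (nothing here beyond those numbers is from the source):

```python
# Sanity-check the article's conversion: 4.2 billion krona/year ≈ $510.5 million/year.
# The exchange rate below is implied by those two figures, not quoted in the article.
annual_sek = 4.2e9    # 4.2 billion krona per year
annual_usd = 510.5e6  # $510.5 million per year

sek_per_usd = annual_sek / annual_usd
# 2021 through 2025 inclusive is five budget years
five_year_usd_billions = annual_usd * 5 / 1e9

print(f"Implied rate: {sek_per_usd:.2f} SEK per USD")      # ~8.23
print(f"Five-year total: ${five_year_usd_billions:.2f}B")  # ~2.55
```

An implied rate of roughly 8.2 krona per dollar is consistent with early-2018 exchange rates, so the article’s figures hang together.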


from: https://www.c4isrnet.com/international/2018/03/14/swedens-plan-to-deter-a-russian-digital-attack/





Top Gun For Grunts: Mattis May Revolutionize Infantry – “Intelligent Soldiers Are Far More Effective And Far Less Likely To Become Casualties”

A soldier holds a PD-100 mini-drone during the PACMAN-I experiment in Hawaii.

“To get a quantum increase in the quality of close combat forces, we can do it in the next two years, (and) the cost compared to the rest of the DoD budget is very small,” said retired Maj. Gen. Robert Scales, who chairs the advisory board for Secretary Mattis’s Close Combat Lethality Task Force.

March 13, 2018, 2:49 PM


WASHINGTON: Forget the old-school grunt. Imagine a future American infantryman trained as intensively as a fighter pilot through hundreds of virtual and real-world drills, culminating in a “small unit Top Gun.” Imagine infantry going into battle with swarms of drones serving as scouts and fire support. Imagine Army and Marine infantry exempted from the Pentagon’s bureaucratic personnel policies so they can build teams of experienced soldiers in their late 20s and early 30s, much like Special Forces.

“Special Forces, Marine infantry, Army infantry… these are the forces that are overused, overextended and the most likely to die,” said Bob Scales, a leading advisor on infantry to Defense Secretary Jim Mattis. “There’s no one administration or one service one can fault for this, because this has been a nagging problem… since World War II.”

But it doesn’t have to stay that way, Scales told me: “To get a quantum increase in the quality of close combat forces, we can do it in the next two years, (and) the cost compared to the rest of the DoD budget is very small.”



Advisor To Mattis

Scales spoke to me Monday about his new position chairing the advisory board for Mattis’s newly created Close Combat Lethality Task Force.

“My relationship with Secretary Mattis… goes back almost 14 years on this subject,” Scales told me. Mattis, a retired Marine four-star, and Scales, a retired Army two-star, worked together when Mattis commanded first Marine Corps Combat Development Command (MCCDC) and then the since-disbanded Joint Forces Command (JFCOM).

Scales is both a military historian and a futurist, as well as a passionate advocate for the often-neglected “poor bloody infantry.” He played a leading role in the Army After Next wargames of the 1990s, which pioneered new concepts in the military use of drones and networks, including ideas now being revived as part of the Army’s Multi-Domain Battle concept. He dedicated his latest book to Mattis and has spoken about it to enthusiastic audiences in the Marine Corps — although, ironically, not yet to his own service, the Army.

In private conversations, “the rank and file and the leaders in the Army I’m talking to are extremely enthusiastic,” Scales said, but as for public presentations, “the first time I’m actually talking to an Army audience is on the 21st at AUSA.”

That’s when the Association of the US Army will host Scales and DoD personnel undersecretary Robert Wilkie, who’s overseeing Mattis’ task force. That Mattis has given the lead to “P&R,” rather than to an acquisition or technology official, suggests how much the task force will emphasize training, personnel, and policy over new equipment.


A young Marine reaches out for a hand-launched drone.



That said, Pentagon budget plans include about $1.2 billion in new investments stemming from the effort’s first phase, a Cost Assessment & Program Evaluation (CAPE) study that Scales said focused primarily on materiel. So what do the infantry really need?

“I think the Number One investment is in sensors and robots,” Scales told me. The great technological revolution of our time is not in armored vehicles, warships, or even jet aircraft, he argues, whose performance is improving on the margins rather than by great leaps. The real revolution is in electronics, especially the continual miniaturization of computing power, which means capabilities that once required a large dedicated platform — a vehicle, a ship, or an aircraft — can now fit in packages small enough for the infantry to carry.

Shoulder-launched anti-tank guided missiles were killing tanks and jets as far back as 1973 (although active protection may change that). Today, Scales said, the breakthrough lies in small unmanned systems — miniature drones in the near term, but ground robots further out — that can carry sensors and weapons for the infantry. The ideal, he said, is systems sufficiently small and cheap that they’re “disposable,” with troops treating them as munitions to be expended rather than as assets to be husbanded.

Both the Army and Marine Corps have experimented with such technologies. Marine Commandant Bob Neller — who noted Scales’s influence at a recent NDIA breakfast — has promised every Marine infantry squad will have its own drone and its own unmanned/electronic systems specialist.

So far, though, the official experiments have focused on robotic Intelligence, Surveillance, & Reconnaissance (ISR), not strikes. Scales, however, was one of the first military futurists to propose armed drones, back during the Army After Next wargames. Whereas large Predators and Reapers strike high-value targets on the orders of senior commanders today, Scales envisions a near future in which every squad leader can order strikes with armed mini-drones.


“This turgid firepower system (we use today) can be replaced by stealthy orbiting drones and a squad leader with a fire support app,” Scales told me. “He presses the button and the target disappears in 20 seconds.”


Army Capt. Marcus Long geared up to use the Dismounted Soldier Training System (DSTS).



Technology doesn’t just apply on the battlefield, however. It can save lives long before the first shot is fired by changing how the infantry train.

Why, asks Scales, do we invest millions in training a fighter pilot, but not in training the infantry troops whose lives are at much greater risk? “We’ve learned the effects of TOPGUN on fighter pilot proficiency, and yet today we don’t have the equivalent (for infantry squads),” Scales said. “What if you had a small unit TOPGUN?”

Scales envisions training simulators at every level, from specialized training centers like the Army’s Fort Irwin down to individual units at their home bases. While training in the field is still vital, it’s limited by its expense, the time it takes to set up, and the physical terrain. In real life, you can’t train your troops in the woods one day and then the next day build a full-scale megacity in the same place, but in virtual reality, you can switch from forest to urban with a few keystrokes. VR allows a variety of environments, adversaries, and tactical situations that real-world training does not — and it allows you to experience them over and over again.

A typical infantry unit gets to do a “force on force” exercise against a thinking, reacting opponent a handful of times a year, Scales said. VR training allows such exercises “hundreds of times a year,” he said, “so a straight leg (infantry trooper) can have just as much opportunity to become as proficient as an operator in Delta Force.”


A V-22 lands Marines in Helmand.



Training is not the only way in which Scales sees regular infantry becoming more like special operators. He also wants the infantry to get its pick of personnel — and to be exempted from Pentagon policies that make it hard to train expert teams and keep them together.

“SEALs, DELTA, and Rangers are selected from the top mental categories. Not so for our straight leg close combat units, both Army and Marine,” Scales said. “Why not? We know from experience, many years of experience, that intelligent soldiers are far more effective and far less likely to become casualties.”

Older infantrymen are more effective as well, because they’re more experienced, skilled and emotionally mature than 18-year-olds. Judging from special operations experience, “the optimum age for a close combat soldier is between 28 and 32,” Scales said. “What if we recruited soldiers for close combat at the beginning of their second tour,” he said, after they’d already proven themselves — and gained useful skills — in a less brutal job such as radioman or medic?

Once you select these elite infantry, you have to keep them. Time to train together contributes both to skills — practicing “team plays,” not just individual skills — and to morale — building the “band of brothers” loyalties that motivate troops to fight.

“Units have to stay together a long period of time in order to be bonded,” Scales said. “Think about a squad that stays together four or five years.”

That infantry squad should also be manned at more than 100 percent strength, Scales argued, so it can take casualties and still have enough manpower. If the optimum infantry squad size is nine, for instance, assign 11 troops to each squad. The assault squads on D-Day were overmanned this way, for example, and the Rangers do something similar today. “The only way to maintain combat proficiency when the bullets start to fly is to go into combat over-manned,” Scales said.

All these reforms go against the grain of Pentagon policy, Scales admits. So exempt the infantry from those policies, he says, much as the legendary Admiral Rickover did to get nuclear engineers excepted in the Navy. “Only 4 percent [of the force] go out every day with the intended purpose of direct, eyeball to eyeball killing,” Scales said. “We need to promote them differently. We need to select them differently. We need to train them differently.”



from: https://breakingdefense.com/2018/03/top-gun-for-grunts-mattis-may-revolutionize-infantry/




A Recon Drone In Every US Marine Corps Squad

In the spring of 2013, over drinks somewhere in Washington, a Marine officer candidate told me about an idea he had. He was getting ready for a wargame — a weekend in the field simulating a combat engagement — and he wanted to use his discretionary budget for the activity to buy a drone.

Now, the military has plenty of drones, from Global Hawks on down, but nothing currently in the inventory could match both the capability and the disposability he wanted. So he was going to buy a camera drone kit for less than $1,000, and try to field it in the wargame.

I never made it out to see how the drone fared, but five years later, adding a drone into a Marine squad is no longer just the stuff of daring candidates looking for an edge over their peers in friendly wargames. Last month, Marines in the California desert practiced with quadcopters of their own.

From USNI News:


The Marine Corps is beginning to field the InstantEye to infantry units across the Fleet Marine Force in a program dubbed “Quads for Squads,” or Q4S as Marines call it, and endorsed by the commandant. Gen. Robert Neller wants every deploying squad — a unit with 11 to 14 Marines — to be equipped with an organic, small UAS capability.

Later this month, Marines with 2nd Battalion, 4th Marines, 5th Marine Regiment will receive and train with the quadcopters at its Camp Pendleton, Calif., base, 2nd Lt. Sam Banks, a 1st Marine Division spokesman, told USNI News.

“We’re just pushing it out at a much faster rate,” Banks said of the drone systems. “They’re going to all of the deploying battalions first.”


The InstantEye will hardly be the first quadcopter tested by the military, nor will it be the first small drone. Perhaps the most iconic drone carried by infantry is the RQ-11 Raven, a hand-tossed fixed-wing designed to break apart like LEGOs when it landed. The Ravens had a unit price of $35,000, and were sold with all the relevant equipment to operate a batch of three for $250,000. That’s bargain basement when compared to the price of other aircraft, but still pricey in the world of small drones.


Staff Sgt. Juan-Ricardo Ortega, an infantry unit leader with Marine Corps Security Forces Battalion, inspects an unmanned aerial vehicle after a landing during a UAV training course aboard Camp Pendleton, Calif., Jan. 31, 2014. The UAV course is two weeks long and focuses on familiarizing Marines and sailors with the equipment primarily through practical application. (U.S. Marine Corps photo by Lance Cpl. Keenan Zelazoski/ Released)



To get scouting capabilities dirt cheap, the Marines and the Army have bought and tested commercial off-the-shelf quadcopters, like DJI Phantoms. These quadcopters typically run only a few hundred dollars, making them as cheap as a functional drone can get. But the savings weren’t worth the risk of compromised hardware: citing cybersecurity concerns last summer, the U.S. Army ordered units to stop using drones from China-based manufacturer DJI.

The InstantEye quadcopters, meanwhile, are built by InstantEye Robotics of Andover, Massachusetts. And while InstantEye’s announcement of a recent sale of 800 quadcopters to the Marine Corps didn’t include a unit price for the drones, they were reported as costing around $1,000 in 2014. If the cost is similar, then the Marines might finally have a squad-based aerial scout that matches or exceeds those of rivals in both capability and disposability.

And it means the Marines are now much closer to Commandant Neller’s vision of a drone in every squad.


from: https://www.c4isrnet.com//newsletters/unmanned-systems/2018/03/09/training-with-squadrones-its-happening-for-the-marines/


With a wingspan of 4.5 feet and a weight of 4.2 pounds, the hand-launched Raven provides aerial observation, day or night, at line-of-sight ranges up to 10 kilometers. The Raven, now available with an optional stabilized gimbaled payload, delivers real-time color or infrared imagery to the ground control and remote viewing stations.

Product Specs

Payloads: Dual forward- and side-look EO camera nose; electronic pan-tilt-zoom with stabilization; forward- and side-look IR camera nose (6.5 oz payloads)
Range: 10 km
Endurance: 60-90 minutes
Speed: 32-81 km/h (17-44 knots)
Operating Altitude (typ.): 100-500 ft (30-152 m) AGL; 14,000 ft MSL max launch altitude
Wing Span: 4.5 ft (1.4 m)
Length: 3.0 ft (0.9 m)
Weight: 4.2 lbs (1.9 kg)
GCS: Common GCS with Puma and Wasp® AE
Launch & Recovery Method: Hand-launched; deep-stall landing

Payload Types

Gimbaled EO/IR cameras with IR illuminator. Digital stabilization. Continuous pan, +10 to -90 degree tilt.

Dual forward- and side-look EO cameras. Digital stabilization; electronic pan-tilt-zoom.

Optional: Side-look IR Camera, forward-look IR camera.
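The spec sheet above lists most figures in paired imperial/metric units. A quick cross-check (the conversion factors are standard constants, not taken from the sheet) shows the pairs agree to within the sheet’s rounding:

```python
# Cross-check the paired units in the Raven spec sheet above.
FT_PER_M = 3.28084      # feet per meter
LB_PER_KG = 2.20462     # pounds per kilogram
KMH_PER_KNOT = 1.852    # km/h per knot

pairs = [
    ("Wing span (ft)", 4.5, 1.4 * FT_PER_M),     # 4.5 ft vs 1.4 m
    ("Length (ft)",    3.0, 0.9 * FT_PER_M),     # 3.0 ft vs 0.9 m
    ("Weight (lb)",    4.2, 1.9 * LB_PER_KG),    # 4.2 lb vs 1.9 kg
    ("Speed lo (kt)",  17,  32 / KMH_PER_KNOT),  # 17 kt vs 32 km/h
    ("Speed hi (kt)",  44,  81 / KMH_PER_KNOT),  # 44 kt vs 81 km/h
]

for name, listed, derived in pairs:
    # Differences of ~0.1 unit are rounding in the spec sheet, not errors.
    print(f"{name}: listed {listed}, derived {derived:.2f}")
```

Each derived value lands within about a tenth of a unit of the listed one, so the dual-unit entries in the sheet are internally consistent.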


Daytime video taken by the Raven drone:



Nighttime video taken by the Raven drone:


Marine Corps Commandant Wants A Drone In Every Squad

What distinguishes the wars of the 21st century from the wars of the 20th, or any century prior? So far, it isn’t the venue, the countries involved, or even the kind of fighting: Nations have fought against insurgencies since there were nations for people to rebel against. If there’s a dominant, distinctive trend in modern warfare, it’s the adaptation of new, especially commercial, technology to the battlefield.

To wit: Marine Corps Commandant Gen. Robert Neller said, according to the Marine Corps Times, “At the end of next year, my goal is every deployed Marine infantry squad has got their own quad copter.”

Neller was speaking at the Modern Day Marine expo in Quantico, Virginia. As the Marine Corps Times reports:


“He would be the Marine that would fly the squad’s UAVs [unmanned aerial vehicles] and help the squad leader manage the information,” Neller said at the Center for Strategic and International Studies think tank. “We’re going to find out: Can the squad leader handle all of that.”


The basic unit of the Marine Corps since its inception, according to mythology as much as practice, is the rifleman. Rifles remain the standard infantry weapon of choice, but as battlefields have evolved and weapons technology with them, squads have added machine guns and grenade launchers as needed. The drone pilot, undoubtedly carrying a rifle as well, would add not a new weapon but a new set of information, using the squad’s quadcopter (“Squadcopter,” if you will).



The military already has small drones, ranging from the palm-sized Black Hornet to the hand-tossed Raven. But these are either, in the case of the Black Hornet, mostly the domain of Special Forces, or, like the Raven, kept at the company level, and neither of these drones is cheap. Ravens cost at least $250,000 a system, when set up to military specifications.

Yet drones, especially ones deployed at the squad level, don’t have to be expensive. Commercial quadcopters and toy drones have seen use in the Ukrainian civil war, where soldiers have tried to shoot them down with rifles.

If the squadcopter is there to scout in short flights, then small drones that fly for 20 or so minutes and stream video back to a pilot could likely do the job, and there are a bunch of drones like that priced around $1000. (Dronemaker DJI debuted one just this week). Neller’s ambition of a quadcopter for every squad certainly sounds ambitious, but it’s not out of the realm of possibility.

Even if the Marines don’t get a drone for every squad, they may someday have to contend with a foe that does. The Army is only just starting to look into anti-squadcopter defense. It would make a lot of sense for the Marines to start exploring how to use squadcopters on offense.


from: https://www.popsci.com/marine-corps-commandant-wants-drone-in-every-squad#page-2



AeroVironment Nano Hummingbird
– Indoor & Outdoor Flight Video

Named one of the “50 Best Inventions of 2011” by TIME Magazine





