Social Engineering

 

 

Artful Manipulation
https://adminsm.asisonline.org/Pages/Artful-Manipulation.aspx
2018-09-01T04:00:00Z

<p>Chief financial officer Malcolm Fisher never thought he would be victimized by cybercrime—until a social engineer successfully impersonated him and bilked his company out of more than $125,000. </p><p>It was relatively easy for the criminal to identify Fisher as a high-value target given his key position within the company—his bio was readily available on the company website. And Fisher's social media profiles on Facebook, Twitter, and LinkedIn revealed several bits of information that marked him as a dream target for a diligent social engineer. </p><p>Fisher frequently participated in poker tournaments and was not modest in describing his success at the table. He posted about attending an upcoming tournament in Las Vegas and catalogued his travel plans across social media platforms. Shortly after his arrival in Las Vegas, Fisher received a text message from what appeared to be the tournament organizer providing a link to the updated schedule. When he clicked on the link, nothing seemed to happen—but he had just unwittingly provided the social engineer with entry into his company-issued mobile device. </p><p>Knowing that the tournament started at 11 the next morning, the fraudster hijacked Fisher's email account and sent an urgent message at 11:15 a.m. to a colleague. The email—supposedly written by Fisher—instructed the employee to immediately wire $125,000 to a vendor, noting that he would be out of touch for several hours because he was attending the tournament. </p><p>The employee, never questioning his boss's instructions, immediately processed the wire transfer. While Fisher left Las Vegas very pleased with his tournament winnings, he soon learned that he was the one who got played.
</p><p>This scenario is not unusual. With more focus than ever on enterprise cybersecurity and preventing data breaches, many executives believe that technology alone provides sufficient protection against such threats. </p><p>But sophisticated threat actors—whether they be nation states, criminals, activists, or disloyal competitors—will frequently target the most significant vulnerability found in most organizations: the human factor. The interaction between human beings and the technology meant to protect the organization is frequently referred to as the weakest link in security.</p><p>The most common method used by these threat actors to exploit the human factor vulnerability is social engineering. In fact, according to the 2018 Verizon Data Breach Investigations Report, more than 90 percent of successful security breaches start with some aspect of social engineering.  </p><p>Social engineering is the skillful manipulation of organizational insiders to undertake certain actions of interest to the social engineer. Insiders are not only employees of the organization—they include anyone who may have unescorted access into a target organization, including service providers such as the guard force, cleaning crews, catering companies, vending machine stockers, maintenance contractors, and more.</p><p>Greater awareness and insight into this process provides a better opportunity to mitigate the risk of social engineering attacks.   </p><h4> Collecting the Data</h4><p>Prior to launching any type of attack against the target, a professional social engineer will spend time collecting available open source information. While such collection may be from a variety of resources, the most frequent medium is simple online research. </p><p>Almost every organization has a website with information about the company, its products and services, executive profiles, press releases, contact information, and career opportunities. 
<br></p><p>While all such sections may provide useful information to a social engineer, executive profiles—which often contain full names, titles, pictures, and a brief biographic sketch—provide considerable insight into key insiders and where they fit into the organizational structure. </p><p>Career opportunities, along with company contact information, provide exploitable details and a portal through which a social engineer may seek direct or indirect contact with the organization. </p><p><strong>Job postings and reviews. </strong>Whether posted on the organization's website or advertised on online job boards, job postings can provide a wealth of information. At a bare minimum, such postings will usually reveal the preferred IT qualifications sought from an applicant, providing valuable insight into the operating systems and software programs the organization uses. The job description might also provide insight concerning potential expansion of the organization, whether it be geographic or through a new product or service. </p><p>With a job posting, an organization is inviting contact with someone from the outside. It provides social engineers an opportunity to electronically submit a cover letter or resume—either directly through human resources or to someone else within the organization chosen by the social engineer to forward the resume onward. The email, along with attachments, can be a medium to introduce malware into the target's system. </p><p>While less frequently exploited, such job postings can also create opportunities for social engineers to interview with the employer and elicit sensitive information. </p><p>Employer review sites such as Glassdoor can provide useful workplace insights posted by employees. These reviews give the social engineer a read on morale within the organization. Generally, it is much easier to manipulate a disgruntled employee than someone who is happy and loyal to his or her employer.
</p><p><strong>Social media and search engines</strong>. While an organization may aggressively use social media to help promote its products and services, an unintended consequence can be the leakage of exploitable information. </p><p>Employees often upload photographs of themselves and coworkers in the workplace, revealing information about physical workspaces including actual floor plans, office configurations, security system hardware, IT systems, employee badges, or employee dress. Much of this information can be extremely useful when planning an actual physical intrusion into the company. </p><p>Creative Google searches will take the social engineer well beyond the most popular entries surfaced regarding the organization's name. </p><p>For example, a simple yet creative search of the company's name and the words "pdf" or "confidential" may surface documents such as employee manuals, employee benefit packages, IT user guides, or contracts. These searches can identify companies subcontracted by the target company for services such as janitorial, trash disposal, security, catering, or temporary staff. </p><p>A search for public court records will provide access to nationwide criminal and civil court documents. These documents will frequently contain operational details regarding the target company or officials that the company would have preferred to keep confidential. </p><p>A common misconception regarding the Internet is that once a company has deleted or modified information previously contained on its corporate website, the original information is no longer available. This is false. </p><p>The Wayback Machine is a digital archive of the World Wide Web and enables users to see archived versions of web pages as far back as 1996. Even if an organization's new security director decided to remove potentially sensitive information from the entity's website, the social engineer can attempt to use the Wayback Machine to retrieve it.
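</p><p>The Wayback Machine also exposes a public "availability" API that returns the closest archived snapshot for a given URL. The sketch below is a minimal illustration using only the Python standard library; the domain shown is a placeholder, not a real target.</p>

```python
import json
import urllib.parse
import urllib.request

WAYBACK_API = "https://archive.org/wayback/available"

def snapshot_query_url(page_url: str, timestamp: str = "") -> str:
    """Build the availability-API query URL for a page.

    timestamp is an optional YYYYMMDD hint meaning 'closest to this date'.
    """
    params = {"url": page_url}
    if timestamp:
        params["timestamp"] = timestamp
    return WAYBACK_API + "?" + urllib.parse.urlencode(params)

def closest_snapshot(page_url: str, timestamp: str = "") -> dict:
    """Fetch the closest archived snapshot record; empty dict if none exists."""
    with urllib.request.urlopen(snapshot_query_url(page_url, timestamp)) as resp:
        data = json.load(resp)
    # The API nests the result under archived_snapshots -> closest
    return data.get("archived_snapshots", {}).get("closest", {})
```

<p>A security team can run the same lookup against its own domain, for example against pages removed during a website cleanup, to see which older versions remain retrievable to an outsider.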
</p><p>Sites such as Google Maps help the social engineer virtually conduct reconnaissance—if the social engineer considered launching an intrusion into target offices, he or she would want to learn as much as possible about access points, access control including badge readers or other access systems, surveillance cameras, and guards. </p><p>The social engineer could also use the maps to identify businesses near the target location that employees may frequent and orchestrate a run-in, resulting in a one-time casual conversation with an employee to carefully gather information not available via open source. It could also be an opportunity to develop an employee for use as a future insider source. </p><p>A second potential objective for the reconnaissance is the identification of locations in the vicinity that make deliveries to the target's office, such as flower shops or restaurants. With this information in hand, the social engineer may decide to impersonate someone making a delivery to obtain unescorted access onto the premises. </p><p><strong>Insiders. </strong>Beyond collecting information on the organization, social engineers also target insiders in these entities. There could be several thousand employees in a medium to large organization, but the social engineer only needs to collect useful data on one or more well-placed individuals. </p><p>He or she will want to know as much as possible about targeted insiders' personal and professional backgrounds, as well as an indication of what their motivations may be. With this information in hand, the social engineer can better manipulate them. </p><p>The most common starting point for data collection on insiders is through social media sites. While there are hundreds of such sites bringing together more than 3.3 billion users, social engineers will typically use sites providing the most prolific information.
</p><p>Facebook can be used to find pictures of a targeted insider and their network of contacts. Here one can learn where the target lives, their age and birthdate, where they went to school, their hobbies and interests, and past and future travel plans. When faced with a target who has enabled privacy settings, the resourceful social engineer will turn to the accounts of the target's spouse or children that may lack such privacy settings. </p><p>Twitter can provide play-by-play action of where the target is and what they are doing at that moment. And on LinkedIn, a social engineer will learn about the target's professional, academic, and work profile; professional interests; and network of contacts.</p><h4>Manipulating Targets</h4><p>Social engineers use four types of attack vectors to scam companies out of money, intellectual property, or data.</p><p><strong>Phishing. </strong>Phishing currently represents more than 90 percent of all social engineering attacks. This includes typical spam emails requesting that the recipient click a link or open an attachment embedded in the email, which can lead to the downloading of malicious tools that may compromise the recipient's computer, if not the entire IT network. </p><p>While such emails do not target specific people and are literally sent out by the thousands, even a small percentage of recipient victims who click on the link may provide the sender with a viable return on investment. </p><p>Professional social engineers will use spear phishing, which effectively tailors the email to a specific target leveraging information previously gleaned from data collection. This will greatly enhance the likelihood that the chosen target will click on the link or open the attachment. </p><p>Another variation would involve the social engineer creating a fictitious LinkedIn account and engaging the target on a specific issue.
If the target tends not to accept invitations from unknown individuals, the social engineer will first invite the target's peers to connect. Then, when the target sees that several of his industry peers are already connected to this fictitious profile, he will also likely accept. </p><p>Once successfully linked, the social engineer will exchange a few emails with the target, eventually sending one that contains the malicious link or attachment. Because the previous exchanges have built rapport and trust, the target is likely to fall for the attack. </p><p><strong>Smishing. </strong>This technique is similar to phishing, but instead of using email as a medium to deliver the attack, the social engineer will send a link or attachment via text message. The result is the same. While smishing is not yet as common as its phishing cousin, it is expected to begin mirroring trends in mass marketing, which is moving more and more to SMS due to its high open rates. </p><p><strong>Vishing.</strong> For professional social engineers, vishing can be fun and exhilarating. While requiring a little more skill, vishing is typically much more effective than the previously mentioned techniques. Here the social engineer will telephone the target using any one of several ploys or pretexts. To increase credibility, the social engineer will spoof the call and manipulate the caller ID seen on the recipient's end. </p><p>Say a social engineer wants to collect protected information regarding the status of a new product at a target company headquartered in Chicago. Posing as a new assistant to the company's vice president of operations, the social engineer will call the operations manager for one of the target firm's laboratories in Los Angeles. </p><p>To add credibility, the social engineer will spoof the call, making it appear as though the telephone number is from the vice president's Chicago office.
She will state that the vice president is making final preparations for a meeting about to take place and urgently needs updates on the product's rollout date and expenditures compared to budgeted figures. As the request appears to be genuinely coming from someone in a position of authority, combined with urgency, the social engineer will likely be successful. </p><p><strong>Direct intrusion. </strong>While considered the most difficult of the four techniques to execute, this is usually the most successful. It involves face-to-face interaction with the target. </p><p>The social engineer can choose from a variety of pretexts for attempting this contact, including posing as someone with an appointment inside the building, IT support, a fire inspector conducting a survey, or an employee of a contracted service provider. </p><p>The social engineer could easily pose as someone making a delivery of a package requiring the recipient's signature, even going so far as to procure a FedEx or UPS uniform online. After reviewing the identified locations near the target facility, the social engineer could also pose as someone making a delivery of flowers, office supplies, or fast food. </p><p>Once inside the facility with unescorted access, the social engineer may emplace listening devices in conference rooms or keyboard loggers to capture specific information, such as network usernames and passwords. </p><p>How difficult would it be for a social engineer to leave several thumb drives around the premises marked "Confidential Payroll"? Betting on the nature of human curiosity, the social engineer would expect that at least one of the employees would find and insert one of the drives into a computer, hoping to see what compensation others are receiving in the company. When they do, the social engineer succeeds in uploading malicious files, potentially compromising the network. </p><p>Another successful ploy involves the social engineer posing as an executive recruiter.
Without a need to divulge the name of a specific client, the "recruiter" can directly contact the target insider, saying that they were impressed by the insider's professional background as seen on LinkedIn and believe that the target may be a great candidate for an attractive position they are trying to fill. </p><p>Feeling they have nothing to lose, the target will frequently allow the social engineer, either over the telephone or during a personal meeting, to elicit considerable information regarding the target's own background, as well as confidential information regarding current and past employers. </p><h4>Influence Techniques</h4><p>Perhaps the main character trait that makes humans so vulnerable to a social engineering ploy is the tendency to blindly trust everyone, even people they do not know. This blind trust can be fatal to an organization's security posture. It is this trust that makes it easy for social engineers to convince their victims that they are whoever they pretend to be. </p><p>In addition to leveraging trust, professional social engineers will also exploit any number of influence techniques. As victims are more likely to assist someone they find to be pleasant, the social engineer will attempt to develop strong personal rapport prior to making the request. Similarly, if the social engineer performs a significant courtesy or kind deed for the victim, the target will often feel a strong sense of obligation to reciprocate by performing a deed for the social engineer. </p><p>Victims are more likely to comply if they believe that the request is coming from someone in authority, or if the social engineer pressures the target by implying that refusing to assist will be seen by others as socially unacceptable. Another tactic involves the social engineer first making a request so large that the victim initially refuses to comply.
The victim will subsequently agree to a smaller follow-up request that appears to meet halfway. </p><p>The social engineer may also take advantage of the perception of scarcity, putting pressure on the victim to make a quick decision as the perceived window of opportunity is about to close. </p><h4>Mitigating Attacks</h4><p>There are basic measures that can significantly lower the risk that an organization will be victimized. </p><p>First, the amount of unnecessary, yet exploitable, data about organizations that can be found online needs to be minimized. In addition to establishing clear policies regarding what employees can post online regarding the organization, someone must be responsible for periodically scanning key sites to ensure compliance. The more data available to social engineers, the more likely the organization will be on a list of targets. </p><p>While unenforceable, this same practice should be encouraged among the organization's employees regarding the personal information they post on social media. </p><p>A second measure is establishing social engineering awareness training within the organization. Such training sensitizes employees to recognize potential social engineering attacks and teaches them what specific actions to take. </p><p>Warning signs of a potential social engineer at work may include a caller refusing to give a callback number, making an unusual request, or showing discomfort when questioned. Employees should also take note if a caller makes claims of authority, stresses urgency, or threatens negative consequences if the employee doesn't act. And if a caller engages in name dropping, flirting, or complimenting, that could be a red flag as well.</p><p>Once alerted, employees need to know what actions to take—simply not complying with the social engineer's request is not enough.
Organizations need to have a system in place where the employee can promptly bring such attacks to the attention of security, via incident reports. </p><p>Employees need to receive this type of training on a periodic basis, ideally annually. To be truly effective, the training should be accompanied by social engineering penetration testing, which mimics potential ploys used by threat actors to breach the organization's security. </p><p>An ongoing social engineering awareness campaign keeps employees alert to such threats and ready to take appropriate action, thereby decreasing existing vulnerabilities. </p><p>In all interactions—whether via email, text, over the phone, or in person—employees must first verify that the person is who they say they are and that they have a legitimate request. Remember this slogan: verify before trusting.</p><p><em>Peter Warmka, CPP, is director of business intelligence for Strategic Risk Management and an adjunct professor for Webster University's cybersecurity master's program. He is a frequent speaker on social engineering threats at conferences for trade associations and wealth management advisory firms. Warmka is a member of ASIS International.</em></p>
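<p>The warning signs described above (urgency, claims of authority, a sender who is not who they appear to be) can be roughly approximated in a screening heuristic. The sketch below is purely illustrative: the keyword lists are invented for this example, and a real mail filter would be far more thorough.</p>

```python
from email.utils import parseaddr

# Illustrative keyword lists only; a production filter would be much broader
URGENCY_WORDS = ("urgent", "immediately", "right away", "asap", "wire")
AUTHORITY_WORDS = ("ceo", "cfo", "president", "director")

def red_flags(from_header: str, reply_to_header: str, body: str) -> list:
    """Return a list of social-engineering warning signs found in a message."""
    flags = []
    from_addr = parseaddr(from_header)[1].lower()
    reply_addr = parseaddr(reply_to_header)[1].lower() if reply_to_header else from_addr
    # A Reply-To pointing at a different domain than From is a classic spoof tell
    if from_addr.split("@")[-1] != reply_addr.split("@")[-1]:
        flags.append("reply-to domain differs from sender domain")
    text = body.lower()
    if any(word in text for word in URGENCY_WORDS):
        flags.append("urgent language pressuring quick action")
    if any(word in text for word in AUTHORITY_WORDS):
        flags.append("claim of authority")
    return flags
```

<p>Such a screen should route suspicious messages into the incident-reporting channel described above for human verification rather than silently block them; the final safeguard remains the employee who verifies before trusting.</p>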


 You May Also Like...

 

 

https://adminsm.asisonline.org/Pages/How-to-Hack-a-Human.aspxHow to Hack a Human<p>​It all started innocuously with a Facebook friend request from an attractive woman named Mia Ash. Once her request was accepted, she struck up a conversation about various topics and showed interest in her new friend's work as a cybersecurity expert at one of the world's largest accounting firms.</p><p>Then, one day Mia shared her dream—to start her own company. She had one problem, though; she did not have a website and did not know how to create one. Surely her new friend could use his expertise to help her achieve her dreams by helping her make one? </p><p>Mia said she could send him some text to include on the new site. He agreed, and when he received a file from Mia he opened it—on his work computer. That simple act launched a malware attack against his company resulting in a significant compromise of sensitive data.</p><p>Mia was not a real person, but a care- fully crafted online persona created by a prolific group of Iranian hackers—known as Oilrig—to help this elaborate spear phishing operation succeed. </p><p>Due to his role in cybersecurity, the target was unlikely to have fallen for a standard phishing attack, or even a normal spear phishing operation. He was too well trained for that. But nobody had prepared him for a virtual honey trap, and he fell for the scheme without hesitation.</p><p>This case is a vivid reminder that when cybersecurity measures become difficult to penetrate by technical means, people become the weakest link in a cybersecurity system. It also illustrates how other intelligence tools can be employed to help facilitate cyber espionage.</p><p>While many hackers are merely looking to exploit whatever they can for monetary gain, those engaging in cyber espionage are different. 
They are often either working directly for a state or large nonstate actor, or as a mercenary contracted by such an actor tasked with obtaining specific information.</p><p>This targeted information typically pertains to traditional espionage objectives, such as weapons systems specifications or the personal information of government employees—like that uncovered in the U.S. Office of Personnel Management hack. </p><p>The information can also be used to further nondefense-related economic objectives, such as China's research and design 863 program, which was created to boost innovation in high-tech sectors in China. </p><p>Given this distinction and context, it is important to understand that hacking operations are just one of the intelligence tools sophisticated cyber espionage actors possess. Hacking can frequently work in conjunction with other intelligence tools to make them more efficient.</p><p>Hacking into the social media accounts or cell phone of a person targeted for a human intelligence recruitment operation can provide a goldmine of information that can greatly assist those determining the best way to approach the target. </p><p>For instance, hacking into a defense contractor's email account could provide important information about the date, time, and place for the testing of a revolutionary new technology. This information could help an intelligence agency focus its satellite imagery, electronic surveillance, and other collection systems on the test site.</p><p>Conversely, intelligence tools can also be used to enable hacking operations. Simply put, if a sophisticated cyber espionage actor wants access to the information contained on a computer system badly enough, and cannot get in using traditional hacking methods, he or she will use other tools to get access to the targeted system. A recent case in Massachusetts illustrates this principle.</p><p>Medrobotics CEO Samuel Straface was leaving his office at about 7:30 p.m. 
one evening when he noticed a man sitting in a conference room in the medical technology company's secure area, working on what appeared to be three laptop computers.</p><p>Straface did not recognize the man as an employee or contractor, so he asked him what he was doing. The man replied that he had come to the conference room for a meeting with the company's European sales director. Straface informed him that the sales director had been out of the country for three weeks.</p><p>The man then said he was supposed to be meeting with Medrobotics' head of intellectual property. But Straface told him the department head did not have a meeting scheduled for that time. </p><p>Finally, the man claimed that he was there to meet the CEO. Straface then identified himself and more strongly confronted the intruder, who said he was Dong Liu—a lawyer doing patent work for a Chinese law firm. Liu showed Straface a LinkedIn profile that listed him as a senior partner and patent attorney with the law firm of Boss & Young. </p><p>Straface then called the police, who arrested Liu for trespassing and referred the case to the FBI. The Bureau then filed a criminal complaint in the U.S. District Court for the District of Massachusetts, charging Liu with one count of attempted theft of trade secrets and one count of attempted access to a computer without authorization. After his initial court appearance, Liu was ordered held pending trial.</p><p>Straface caught Liu while he was presumably attempting to hack into the company's Wi-Fi network. The password to the firm's guest network was posted on the wall in the conference room, and it is unclear how well it was isolated from the company's secure network. 
It was also unknown whether malware planted on the guest network could have affected the rest of the company's information technology infrastructure.</p><p>The fact that the Chinese dispatched Liu from Canada to Massachusetts to conduct a black bag job—an age-old intelligence tactic to covertly gain access to a facility—indicates that it had not been able to obtain the information it desired remotely.</p><p>China had clear interest in Medrobotics' proprietary information. Straface told FBI agents that companies from China had been attempting to develop a relationship with the company for about 10 years, according to the FBI affidavit. Straface said he had met with Chinese individuals on about six occasions, but ultimately had no interest in pursuing business with the Chinese.</p><p>Straface also noted that he had always met these individuals in Boston, and had never invited them to his company's headquarters in Raynham, Massachusetts. This decision shows that Straface was aware of Chinese interest in his company's intellectual property and the intent to purloin it. It also shows that he consciously attempted to limit the risk by keeping the individuals away from his facilities. Yet, despite this, they still managed to come to the headquarters.</p><p>Black bag attacks are not the only traditional espionage tool that can be employed to help facilitate a cyberattack. Human intelligence approaches can also be used. </p><p>In traditional espionage operations, hostile intelligence agencies have always targeted code clerks and others with access to communications systems. </p><p>Computer hackers have also targeted humans. Since the dawn of their craft, social engineering—a form of human intelligence—has been widely employed by hackers, such as the Mia Ash virtual honey trap that was part of an elaborate and extended social engineering operation.</p><p>But not all honey traps are virtual. 
If a sophisticated actor wants access to a system badly enough, he can easily employ a physical honey trap—a very effective way to target members of an IT department to get information from a company's computer system. This is because many of the lowest paid employees at companies—the entry level IT staff—are given access to the company's most valuable information with few internal controls in place to ensure they don't misuse their privileges.</p><p>Using the human intelligence approaches of MICE (money, ideology, compromise, or ego), it would be easy to recruit a member of most IT departments to serve as a spy inside the corporation. Such an agent could be a one-time mass downloader, like Chelsea Manning or Edward Snowden. </p><p>Or the agent could stay in place to serve as an advanced, persistent, internal threat. Most case officers prefer to have an agent who stays in place and provides information during a prolonged period of time, rather than a one-time event.</p><p>IT department personnel are not the only ones susceptible to such recruitment. There are a variety of ways a witting insider could help inject malware into a corporate system, while maintaining plausible deniability. Virtually any employee could be paid to provide his or her user ID and password, or to intentionally click on a phishing link or open a document that will launch malware into the corporate system. </p><p>An insider could also serve as a spotter agent within the company, pointing out potential targets for recruitment by directing his or her handler to employees with marital or financial issues, or an employee who is angry about being passed over for a promotion or choice assignment.</p><p>An inside source could also be valuable in helping design tailored phishing attacks. 
For instance, knowing that Bob sends Janet a spreadsheet with production data every day, and using past examples of those emails to know how Bob addresses her, would help a hacker fabricate a convincing phishing email.</p><p>Insider threats are not limited to the recruitment of current employees. There have been many examples of the Chinese and Russians recruiting young college students and directing them to apply for jobs at companies or research institutions in which they have an interest.</p><p>In 2014, for instance, the FBI released a 28-minute video about Glenn Duffie Shriver—an American student in Shanghai who was paid by Chinese intelligence officers and convicted of trying to acquire U.S. defense secrets. The video was designed to warn U.S. students studying abroad about attempts to recruit them for espionage.</p><p>Because of the common emphasis on the cyber aspect of cyber espionage—and the almost total disregard for the role of other espionage tools in facilitating cyberattacks—cyber espionage is often considered to be an information security problem that only technical personnel can address. </p><p>But in the true sense of the term, cyber espionage is a much broader threat that can emanate from many different sources. Therefore, the problem must be addressed in a holistic manner. </p><p>Chief information security officers need to work hand-in-glove with chief security officers, human resources, legal counsel, and others if they hope to protect the companies and departments in their charge. </p><p>When confronted by the threat of sophisticated cyber espionage actors who have a wide variety of tools at their disposal, employees must become a crucial part of their employers' defenses as well. </p><p>Many companies provide cybersecurity training that includes warnings about hacking methods, like phishing and social engineering, but very few provide training on how to spot traditional espionage threats and tactics. 
This leaves most workers ill-prepared to guard themselves against such methods. </p><p>Ultimately, thwarting a sophisticated enemy equipped with a wide array of espionage tools will be possible only with a better-informed and more coordinated effort on the part of the entire company. </p><h4>Sidebar: The MICE and Men Connection</h4><p>The main espionage approaches that could be used to induce an employee to provide information or network credentials, or to introduce malware, can be explained using the KGB acronym MICE.</p><p>M = Money. In many cases, this does equal cold, hard cash. But it can also include other gifts of financial value—travel, jewelry, vehicles, education, or jobs for family members. Historic examples of spies recruited using this hook include CIA officer Aldrich Ames and the Walker spy ring.</p><p>A recent example of a person recruited using this motivation was U.S. State Department employee Candace Claiborne, whom the U.S. Department of Justice charged in March 2017 with receiving cash, electronics, and travel for herself from her Chinese Ministry of State Security handler, as well as free university education and housing for her son.</p><p>I = Ideology. This can include a person who has embraced an ideology such as communism, someone who rejects such an ideology, or someone who otherwise opposes the actions and policies of his or her government.</p><p>Historical examples of this recruitment approach include the Cambridge Five spy ring in the United Kingdom and the Rosenbergs, who stole nuclear weapons secrets for the Soviet Union while living in the United States.</p><p>One recent example of an ideologically motivated spy is Ana Montes, a senior U.S. Defense Intelligence Agency analyst recruited by the Cuban DGI, which appealed to her Puerto Rican heritage and her opposition to U.S. policies toward Puerto Rico. Another ideologically motivated spy was Chelsea Manning, a U.S. 
Army private who stole thousands of classified documents and provided them to WikiLeaks.</p><p>C = Compromise. This can include a wide range of activities that can provide leverage over a person, such as affairs and other sexual indiscretions, black-market currency transactions, and other illegal activity. It can also include other leverage that a government can use to place pressure on family members, like imprisoning them or threatening their livelihood.</p><p>Historic examples of this approach include U.S. Marine security guard Clayton Lonetree, who was snared by a Soviet sexual blackmail scheme—a honey trap—in Moscow, and FBI Special Agent James Smith, who was compromised by a Chinese honey trap.</p><p>More recently, a Japanese foreign ministry communications officer hanged himself in May 2004 after falling into a Chinese honey trap in Shanghai.</p><p>E = Ego. This approach often involves people who are disenchanted after being passed over for a promotion or choice assignment, those who believe they are smarter than everyone else and can get away with the crime, as well as those who do it for excitement.</p><p>Often, ego approaches involve one of the other elements, such as ego and money—"I deserve more money"—or ego and compromise—"I deserve a more attractive lover."</p><p>A recent example is the case of Boeing satellite engineer Gregory Justice, who passed stolen electronic files to an undercover FBI agent he believed was a Russian intelligence officer. 
While Justice took small sums of money for the information, he was primarily motivated by the excitement of being a spy like one of those in the television series The Americans, of which he was a fan.</p><p><em><strong>Scott Stewart</strong> is vice president of tactical analysis at Stratfor.com and lead analyst for Stratfor Threat Lens, a product that helps corporate security professionals identify, measure, and mitigate risks that emerging threats pose to their people, assets, and interests around the globe.</em></p>
https://adminsm.asisonline.org/Pages/Book-Review-Insider-Threats.aspxBook Review: Insider Threats<p>​Cornell University Press; cornellpress.cornell.edu; 216 pages; $89.95.</p><p>A collection of essays and case studies that originated in two workshops sponsored by the Global Nuclear Future Project of the American Academy of Arts and Sciences in 2011 and 2014, <em>Insider Threats</em> focuses on protecting the nuclear industry—but its lessons apply across many sectors.</p><p>The case studies are fascinating. A chapter devoted to the Fort Hood terrorist attack shows how changes in mission and procedures allowed information about the perpetrator to slip through the cracks. Instead of capturing warning signals, the systems scattered them. </p><p>Similar lessons were learned from the post–9/11 anthrax attacks in the United States. The author says that the suspect gained access to anthrax through “a complicated mix of evolving regulations, organizational culture, red flags ignored, and happenstance.”  </p><p>A real strength of this book is its root-cause analysis approach. Blame is rarely laid at the feet of incompetent people, but assigned to other factors like the unintended consequences of organizational design and known psychological tendencies. </p><p>The last chapter brings together all the lessons learned and cites 10 worst practices. For example, number seven is: “forget that insiders may know about security measures and how to work around them.” This chapter will be the most valuable to security practitioners because it offers a roadmap towards building an insider threat mitigation plan.</p><p><em>Insider Threats </em>is well-written, even literary. Its chief lesson: organizations are rarely designed to catch the insider, and much work needs to be done to protect them.</p><p><strong><em>Reviewer: Ross Johnson, CPP</em></strong><em>, is the senior manager of security and contingency planning for Capital Power, and infrastructure advisor for Awz Ventures. 
He previously worked as the security supervisor for an offshore oil drilling company in the Gulf of Mexico and overseas. Johnson is the author of Antiterrorism and Threat Response: Planning and Implementation.</em></p>
https://adminsm.asisonline.org/Pages/A-Professional-Path.aspxA Professional Path<p>​Until recently, security has been considered a trade, with practitioners fighting for proper standing in the institutions they protect. But the industry is now at a crossroads.</p><p>Before us lie two paths. One is a continuation of the status quo. We may continue to glide down this road, but it is not a self-determined path. It has been chosen for us because we have not clearly defined security’s role. Given this failure to self-define, security has traditionally been defined by others by the task it performs, such as information security, investigations, physical security, or executive protection. This type of definition diminishes the value of the security function; our role is more than just our allocated tasks.</p><p>The second road is one of self-determination and opportunity. It offers a chance for the industry to advance from a trade to a fully respected profession. On this road, we can take control of the dialogue, shape the conversation surrounding our field, and make our own way forward. As an industry—with ASIS taking the lead—we can keep advancing until security is considered a profession.</p><p>How can we advance on this second road? First we need a clear definition of the role of security in the private sector. We also need a core base of knowledge that supports our understanding of that role, which can be taught—not only to college students, but to transitioning personnel coming into our industry and to our hiring managers. There also needs to be an established expectation that practitioners will share this knowledge of security’s role and the core competencies associated with it. </p><p>ASIS International has already started defining this role through the concept of enterprise security risk management (ESRM). 
With its embrace of ESRM, ASIS has positioned our industry to travel down the road of opportunity and self-determination, with ESRM as the guiding principle to help chart our course.  </p><p>Not everyone in the industry is ready for this journey, however. For some who may have heard of the concept but still find it vague, questions remain. Primarily: What exactly is ESRM and why is it needed?</p><h4>What is ESRM?</h4><p>At its core, ESRM is the practice of managing a security program through the use of risk principles. It’s a philosophy of management that can be applied to any area of security and any task that is performed by security, such as physical, cyber, information, and investigations. </p><p>The practice of ESRM is guided by long-standing internationally established risk management principles. These principles consist of fundamental concepts: What’s the asset? What’s the risk? How should you mitigate that risk? How should you respond if a risk becomes realized? What is your process for recovering from an event if a breach happens? Collectively, these principles form a thoughtful paradigm that guides the risk management thought process.</p><p>When pursued, these questions elicit valuable information, and they can be asked of every security-related task. For instance, investigations, forensics, and crisis management are all different security functions, but when they are discussed within the ESRM framework they are simply different types of incident response. </p><p>Similarly, every function of physical and information security, such as password and access management, encryption, and CCTV, is simply considered a mitigation effort within the ESRM paradigm. These may seem to be merely semantic differences, but they are important nuances. When we define these functions within the ESRM paradigm, we also start to define the role we play in the overall enterprise.</p><p>ESRM elevates the level at which the role of security management is defined. 
Instead of defining this role at the task level, it defines the role at the higher, overarching level of risk management. </p><p>By raising the level of security’s role, ESRM brings it closer to the C-suite, where executives are considering much more than individual tasks. And by defining the role through risk principles, it better positions the security function within the business world at large. Business executives in all fields understand risk; they make risk decisions every day. Using ESRM principles to guide our practice solidifies our place within the language of business while also defining the role we play within the business.</p><p>For example, consider a company with a warehouse and a server. In the warehouse, security is protecting widgets, and on the server, security is protecting data. Under the common risk principles, we ask: What are the risks to the widgets and data? How would we protect against those risks? Who owns the widgets, and who owns the data? </p><p>We may decide to put access control and alarms on the warehouse or a password and encryption on the data. In both instances, we’re protecting against intrusion. The goal is the same—protection. For each task, the skill set is different, just like skill sets differ in any other aspect of security: investigations, disaster response, information technology. But the risk paradigm is the same for each.</p><h4>Why We Need It</h4><p>We need ESRM to move beyond the tasks that security managers and their teams are assigned. For instance, if you manage physical security, your team is the physical security team. If you do investigations, you are an investigator. If you manage information security, your team is the information security team. </p><p>But these tasks merely define the scope of responsibility. Our roles are broader than our assigned tasks. Our responsibilities should be viewed not as standalone tasks, but as related components within our roles as security risk managers. 
</p><p>Having a clear, consistent, self-defined role provides significant benefits. First, it preempts others from defining our role for us in a way that fails to adequately capture and communicate our value. </p><p>Second, it helps us better position ourselves in the C-suite. C-level executives often struggle to understand what security managers do and where to align us. This is often reflected in the frustrations expressed in some of our own conversations about needing a proverbial seat at the table. In one sense, this exclusion may seem justified: if we can’t define our role beyond describing our tasks, why would upper management charge us with higher-level leadership and strategy?</p><p>Third, it provides guidance to our industry. Greater use of ESRM will provide an always-maturing common base of knowledge, with consistent terms of use and clear expectations for success. </p><p>This benefits not only practitioners in our industry, but also all other executives who may need to interact with the security practice or work with the security manager. This can be especially valuable during times of change, such as when a security manager switches companies or industries, or when new executives come into the security manager’s firm.</p><p>In those situations, security managers often feel that they are continually educating others on what they do. But this endless starting-over process wouldn’t be necessary if there were a common understanding of what security’s role is, beyond the scope of its responsibilities.</p><h4>Why Now?</h4><p>The industry at large has talked about ESRM for at least the last 10 years. But as relevant as the topic was a few years ago, the present moment is the right one for ESRM, because security risks now have the potential to become more disruptive to business than in the past. </p><p>There are several reasons for this. The use of technology in the current economy has allowed businesses to centralize operations and practices. 
While this consolidation may have increased efficiency, it has also made those centralized operations more susceptible to disruption. When operations were more geographically dispersed, vulnerabilities were more spread out. Now, the concentrated risks may have a more serious negative impact on the business. </p><p>We are also moving beyond traditional information security and the protection of digitized data. Now, cybersecurity risks pose threats of greater business disruption. For example, threats to the Internet of Things (IoT) have the potential to cause businesses more harm than the information losses they suffered in the past.</p><p>Many executives understand the significance of these risks, and they are looking for answers beyond the typical siloed approach to security, in which physical security and information security are pursued separately. They realize that the rising cyber risks, in tandem with the increasing centralization of business operations, have created a gap in security that needs to be closed. </p><p>Boards are also becoming more engaged, which means that senior management must also become engaged, and someone will have to step in and fill that gap. That could be a chief risk officer, a board-level committee, an internal audit unit…or security. Hopefully, it will be security, but to step up and meet this challenge, security professionals must be able to consistently define their role beyond simply defining their tasks.</p><h4>Making the Transition</h4><p>What we need is a roadmap toward professionalization. </p><p>ASIS is leading the effort of defining security’s role through ESRM. At ASIS 2017 in Dallas, you will hear more conversation around ESRM, and more maturity and consistency in that conversation. As the leading security management professional organization, ASIS is best positioned to guide us along the roadmap from a trade to a profession. 
</p><p>The ASIS Board of Directors has made ESRM an essential component of its core mission. It has started incorporating ESRM principles into its strategic roadmap, which means that ASIS is starting to operationalize this philosophy—a critical step in building out this roadmap. Other steps will be needed; it is essential that volunteers, both seasoned and new to the field, embrace this shift towards professionalization for it to gain traction.</p><p>This transition will not occur with the flip of a switch. It will take dedication to challenge our own notions of how we perceive what we do, the language we use to communicate to our business partners, and our approach toward executing our functions.  It will take time and comprehensive reflection, and the ability to recognize when we don’t get it right. We may not be totally wrong either, but thoroughness in developing consistency is critical.</p><p>There are some core foundational elements that need to be in place for this ESRM transition to be successful. First, there needs to be a consistent base of knowledge for our industry to work from: a common lexicon and understanding of security’s role that is understood by practitioners and the business representatives we work with. </p><p>We also need both a top-down and bottom-up approach. New security practitioners entering the industry from business or academia, or transitioning from law enforcement or the military, need a comprehensive understanding of risk management principles and how a risk paradigm drives the security management thought process. There should be an expectation that these foundational skill sets are in place when someone enters the security field. Working from a common base of knowledge, these ESRM concepts should be incorporated into the security management curriculum, consistently established in every security certification, and inherent in job descriptions and hiring expectations at every level.  
</p><p>We also need to build expectations regarding what security’s role is, and how it goes beyond its assigned tasks, from the top down—among executives, boards, hiring managers, and business partners. A clear and common understanding of security’s role will make it easier to define success and the skill sets needed to be successful. Organizations like ASIS will assist in providing the wherewithal to support these leaders. </p><p>If we truly are security risk managers, then there must be an expectation of foundational and comprehensive risk skill sets when hiring decisions are made. There could be educational opportunities through ASIS, through global partnerships with universities, and through publications coordinated with organizations that reach the C-suite, such as the Conference Board of the National Association of Corporate Directors.</p><p>Clearly, academia needs to play a role as well. College students interested in entering this dynamic industry will come in better prepared to assist security leaders and businesses with a solid knowledge base of security risk management fundamentals. And once a rigorous ESRM body of knowledge is established, ASIS has the clout, expertise, and standing to provide a certification for academic institutions that incorporate these concepts in their curricula, which will provide for a more consistent understanding of security’s role.</p><p>ASIS has established ESRM as a global strategic priority and has formed an ESRM Commission to drive and implement this strategy. One of the commission’s first steps is developing a toolkit comprising a primer and a maturity model.</p><h4>Benefits to ASIS Members</h4><p>There is a question I ask of every candidate I interview: “Tell me about a time when you’ve been frustrated in this industry.” </p><p>Every answer comes down to one of two issues. One, we do not know and cannot clearly define our role. Two, our business partners cannot clearly define our role. 
Both of these frustrations are manageable, and both are our fault as an industry for failing to establish clarity. This failure strains our relationships with business partners, shaping how we are perceived and how likely our expert guidance is to be accepted.</p><p>Having a clearly defined security role through ESRM helps build a foundation for a more satisfying career in the security industry. It would provide us with proper standing in our enterprises and better position us to have a seat at the table for the right reasons, ones that executives understand and can support.</p><p>For the practitioner, a consistent security program through ESRM provides a framework that brings security mitigation tasks together under one umbrella: physical, investigations, cyber, information, business continuity, brand protection, and more. </p><p>The human resources industry has professionalized over the last decade or so. We see this in its standing within business, its seat at the table, and its upgrades in title and pay. Now, with the rise in threats and potential business disrupters, our industry has an opportunity. Business leaders and boards are looking for answers. We have the necessary skill sets and a dedicated, supportive professional association in ASIS to take the lead.</p><p>We are at a crossroads. It is time to choose the path of self-determination, take control of this conversation, and make the transition from trade to profession.</p><p><em>Brian J. Allen, Esq., CPP, is the former Chief Security Officer for Time Warner Cable, a former member of the ASIS Board of Directors, and a current member of the ASIS ESRM Commission.</em><br></p>