Category Archives: GitHub

A Single Cloud Compromise Can Feed an Army of AI Sex Bots

Organizations that get relieved of credentials to their cloud environments can quickly find themselves part of a disturbing new trend: Cybercriminals using stolen cloud credentials to operate and resell sexualized AI-powered chat services. Researchers say these illicit chat bots, which use custom jailbreaks to bypass content filtering, often veer into darker role-playing scenarios, including child sexual exploitation and rape.

Image: Shutterstock.

Researchers at security firm Permiso Security say attacks against generative artificial intelligence (AI) infrastructure like Bedrock from Amazon Web Services (AWS) have increased markedly over the last six months, particularly when someone in the organization accidentally exposes their cloud credentials or keys online, such as in a code repository like GitHub.

Investigating the abuse of AWS accounts for several organizations, Permiso found attackers had seized on stolen AWS credentials to interact with the large language models (LLMs) available on Bedrock. But they also soon discovered none of these AWS users had enabled logging (it is off by default), and thus they lacked any visibility into what attackers were doing with that access.

So Permiso researchers decided to leak their own test AWS key on GitHub, while turning on logging so that they could see exactly what an attacker might ask for, and what the responses might be.

Within minutes, their bait key was scooped up and used to power a service that offers AI-powered sex chats online.

“After reviewing the prompts and responses it became clear that the attacker was hosting an AI roleplaying service that leverages common jailbreak techniques to get the models to accept and respond with content that would normally be blocked,” Permiso researchers wrote in a report released today.

“Almost all of the roleplaying was of a sexual nature, with some of the content straying into darker topics such as child sexual abuse,” they continued. “Over the course of two days we saw over 75,000 successful model invocations, almost all of a sexual nature.”

Ian Ahl, senior vice president of threat research at Permiso, said attackers in possession of a working cloud account traditionally have used that access for run-of-the-mill financial cybercrime, such as cryptocurrency mining or spam. But over the past six months, Ahl said, Bedrock has emerged as one of the top targeted cloud services.

“Bad guy hosts a chat service, and subscribers pay them money,” Ahl said of the business model for commandeering Bedrock access to power sex chat bots. “They don’t want to pay for all the prompting that their subscribers are doing, so instead they hijack someone else’s infrastructure.”

Ahl said many of the AI-powered chat conversations initiated by users of their honeypot AWS key were harmless roleplaying of sexual behavior.

“But a percentage of it is also geared toward very illegal stuff, like child sexual assault fantasies and rapes being played out,” Ahl said. “And these are typically things the large language models won’t be able to talk about.”

AWS’s Bedrock uses large language models from Anthropic, which incorporates a number of technical restrictions aimed at placing certain ethical guardrails on the use of its LLMs. But attackers can evade or “jailbreak” their way out of these restricted settings, usually by asking the AI to imagine itself in an elaborate hypothetical situation under which its normal restrictions might be relaxed or discarded altogether.

“A typical jailbreak will pose a very specific scenario, like you’re a writer who’s doing research for a book, and everyone involved is a consenting adult, even though they often end up chatting about nonconsensual things,” Ahl said.

In June 2024, security experts at Sysdig documented a new attack that leveraged stolen cloud credentials to target ten cloud-hosted LLMs. The attackers Sysdig wrote about gathered cloud credentials through a known security vulnerability, but the researchers also found the attackers sold LLM access to other cybercriminals while sticking the cloud account owner with an astronomical bill.

“Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers: in this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted,” Sysdig researchers wrote. “If undiscovered, this type of attack could result in over $46,000 of LLM consumption costs per day for the victim.”

Ahl said it’s not certain who is responsible for operating and selling these sex chat services, but Permiso suspects the activity may be tied to a platform cheekily named “chub[.]ai,” which offers a broad selection of pre-made AI characters with whom users can strike up a conversation. Permiso said almost every character name from the prompts they captured in their honeypot could be found at Chub.

Some of the AI chat bot characters offered by Chub. Some of these characters include the tags “rape” and “incest.”

Chub offers free registration, via its website or a mobile app. But after a few minutes of chatting with their newfound AI friends, users are asked to purchase a subscription. The site’s homepage features a banner at the top that strongly suggests the service is reselling access to existing cloud accounts. It reads: “Banned from OpenAI? Get unmetered access to uncensored alternatives for as little as $5 a month.”

Until late last week Chub offered a wide selection of characters in a category called “NSFL” or Not Safe for Life, a term meant to describe content that is disturbing or nauseating to the point of being emotionally scarring.

Fortune profiled Chub AI in a January 2024 story that described the service as a virtual brothel advertised by illustrated girls in spaghetti strap dresses who promise a chat-based “world without feminism,” where “girls offer sexual services.” From that piece:

Chub AI offers more than 500 such scenarios, and a growing number of other sites are enabling similar AI-powered child pornographic role-play. They are part of a broader uncensored AI economy that, according to Fortune’s interviews with 18 AI developers and founders, was spurred first by OpenAI and then accelerated by Meta’s release of its open-source Llama tool.

Fortune says Chub is run by someone using the handle “Lore,” who said they launched the service to help others evade content restrictions on AI platforms. Chub charges fees starting at $5 a month to use the new chatbots, and the founder told Fortune the site had generated more than $1 million in annualized revenue.

KrebsOnSecurity sought comment about Permiso’s research from AWS, which initially seemed to downplay the seriousness of the researchers’ findings. The company noted that AWS employs automated systems that will alert customers if their credentials or keys are found exposed online.

AWS explained that when a key or credential pair is flagged as exposed, it is then restricted to limit the amount of abuse that attackers can potentially commit with that access. For example, flagged credentials can’t be used to create or modify authorized accounts, or spin up new cloud resources.

Ahl said Permiso did indeed receive multiple alerts from AWS about their exposed key, including one that warned their account may have been used by an unauthorized party. But they said the restrictions AWS placed on the exposed key did nothing to stop the attackers from using it to abuse Bedrock services.

Sometime in the past few days, however, AWS responded by including Bedrock in the list of services that will be quarantined in the event an AWS key or credential pair is found compromised or exposed online. AWS confirmed that Bedrock was a new addition to its quarantine procedures.

Additionally, not long after KrebsOnSecurity began reporting this story, Chub’s website removed its NSFL section. It also appears to have removed cached copies of the site from the Wayback Machine at archive.org. Still, Permiso found that Chub’s user stats page shows the site has more than 3,000 AI conversation bots with the NSFL tag, and that 2,113 accounts were following the NSFL tag.

The user stats page at Chub shows more than 2,113 people have subscribed to its AI conversation bots with the “Not Safe for Life” designation.

Permiso said their entire two-day experiment generated a $3,500 bill from AWS. Some of that cost was tied to the 75,000 LLM invocations caused by the sex chat service that hijacked their key. But they said the remaining cost was a result of turning on LLM prompt logging, which is not on by default and can get expensive very quickly.

Which may explain why none of Permiso’s clients had that type of logging enabled. Paradoxically, Permiso found that while enabling these logs is the only way to know for sure how crooks might be using a stolen key, the cybercriminals who are reselling stolen or exposed AWS credentials for sex chats have started including programmatic checks in their code to ensure they aren’t using AWS keys that have prompt logging enabled.

“Enabling logging is actually a deterrent to these attackers because they are immediately checking to see if you have logging on,” Ahl said. “At least some of these guys will totally ignore those accounts, because they don’t want anyone to see what they’re doing.”
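
The check Ahl describes corresponds to a single documented AWS API call. Below is a minimal boto3 sketch of that kind of probe, offered purely as an illustration of the technique rather than the attackers’ actual code:

    import boto3

    # Ask the account's Bedrock control plane for its model invocation
    # logging settings. This requires only one read permission.
    bedrock = boto3.client("bedrock", region_name="us-east-1")
    config = bedrock.get_model_invocation_logging_configuration()

    # An absent or empty loggingConfig means prompts and responses are
    # not being recorded, which is the condition attackers look for.
    if config.get("loggingConfig"):
        print("Prompt logging is on; some attackers will skip this key.")
    else:
        print("No prompt logging; abuse would leave no record of the chats.")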

In a statement shared with KrebsOnSecurity, AWS said its services are operating securely, as designed, and that no customer action is needed. Here is their statement:

“AWS services are operating securely, as designed, and no customer action is needed. The researchers devised a testing scenario that deliberately disregarded security best practices to test what may happen in a very specific scenario. No customers were put at risk. To carry out this research, security researchers ignored fundamental security best practices and publicly shared an access key on the internet to observe what would happen.”

“AWS, nonetheless, quickly and automatically identified the exposure and notified the researchers, who opted not to take action. We then identified suspected compromised activity and took additional action to further restrict the account, which stopped this abuse. We recommend customers follow security best practices, such as protecting their access keys and avoiding the use of long-term keys to the extent possible. We thank Permiso Security for engaging AWS Security.”

AWS said customers can configure model invocation logging to collect Bedrock invocation logs, model input data, and model output data for all invocations in the AWS account used in Amazon Bedrock. Customers can also use CloudTrail to monitor Amazon Bedrock API calls.
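
For defenders, turning that logging on amounts to one configuration call. Here is a minimal boto3 sketch, assuming an S3 bucket already exists with a policy that lets Bedrock write to it (the bucket name and prefix below are placeholders):

    import boto3

    bedrock = boto3.client("bedrock", region_name="us-east-1")

    # Deliver the full prompt and response text for every Bedrock
    # invocation to S3. Per Permiso, this can get expensive at volume.
    bedrock.put_model_invocation_logging_configuration(
        loggingConfig={
            "s3Config": {
                "bucketName": "example-bedrock-logs",  # placeholder
                "keyPrefix": "invocation-logs/",
            },
            "textDataDeliveryEnabled": True,
            "imageDataDeliveryEnabled": False,
            "embeddingDataDeliveryEnabled": False,
        }
    )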

The company said AWS customers also can use services such as GuardDuty to detect potential security concerns and Billing Alarms to provide notifications of abnormal billing activity. Finally, AWS Cost Explorer is intended to give customers a way to visualize and manage Bedrock costs and usage over time.

Anthropic told KrebsOnSecurity it is always working on novel techniques to make its models more resistant to these kinds of jailbreaks.

“We remain committed to implementing strict policies and advanced techniques to protect users, as well as publishing our own research so that other AI developers can learn from it,” Anthropic said in an emailed statement. “We appreciate the research community’s efforts in highlighting potential vulnerabilities.”

Anthropic said it uses feedback from child safety experts at Thorn around signals often seen in child grooming to update its classifiers, enhance its usage policies, fine tune its models, and incorporate those signals into testing of future models.

A Single Cloud Compromise Can Feed an Army of AI Sex Bots

Organizations that get relieved of credentials to their cloud environments can quickly find themselves part of a disturbing new trend: Cybercriminals using stolen cloud credentials to operate and resell sexualized AI-powered chat services. Researchers say these illicit chat bots, which use custom jailbreaks to bypass content filtering, often veer into darker role-playing scenarios, including child sexual exploitation and rape.

Image: Shutterstock.

Researchers at security firm Permiso Security say attacks against generative artificial intelligence (AI) infrastructure like Bedrock from Amazon Web Services (AWS) have increased markedly over the last six months, particularly when someone in the organization accidentally exposes their cloud credentials or key online, such as in a code repository like GitHub.

Investigating the abuse of AWS accounts for several organizations, Permiso found attackers had seized on stolen AWS credentials to interact with the large language models (LLMs) available on Bedrock. But they also soon discovered none of these AWS users had enabled logging (it is off by default), and thus they lacked any visibility into what attackers were doing with that access.

So Permiso researchers decided to leak their own test AWS key on GitHub, while turning on logging so that they could see exactly what an attacker might ask for, and what the responses might be.

Within minutes, their bait key was scooped up and used to power a service that offers AI-powered sex chats online.

“After reviewing the prompts and responses it became clear that the attacker was hosting an AI roleplaying service that leverages common jailbreak techniques to get the models to accept and respond with content that would normally be blocked,” Permiso researchers wrote in a report released today.

“Almost all of the roleplaying was of a sexual nature, with some of the content straying into darker topics such as child sexual abuse,” they continued. “Over the course of two days we saw over 75,000 successful model invocations, almost all of a sexual nature.”

Ian Ahl, senior vice president of threat research at Permiso, said attackers in possession of a working cloud account traditionally have used that access for run-of-the-mill financial cybercrime, such as cryptocurrency mining or spam. But over the past six months, Ahl said, Bedrock has emerged as one of the top targeted cloud services.

“Bad guy hosts a chat service, and subscribers pay them money,” Ahl said of the business model for commandeering Bedrock access to power sex chat bots. “They don’t want to pay for all the prompting that their subscribers are doing, so instead they hijack someone else’s infrastructure.”

Ahl said much of the AI-powered chat conversations initiated by the users of their honeypot AWS key were harmless roleplaying of sexual behavior.

“But a percentage of it is also geared toward very illegal stuff, like child sexual assault fantasies and rapes being played out,” Ahl said. “And these are typically things the large language models won’t be able to talk about.”

AWS’s Bedrock uses large language models from Anthropic, which incorporates a number of technical restrictions aimed at placing certain ethical guardrails on the use of their LLMs. But attackers can evade or “jailbreak” their way out of these restricted settings, usually by asking the AI to imagine itself in an elaborate hypothetical situation under which its normal restrictions might be relaxed or discarded altogether.

“A typical jailbreak will pose a very specific scenario, like you’re a writer who’s doing research for a book, and everyone involved is a consenting adult, even though they often end up chatting about nonconsensual things,” Ahl said.

In June 2024, security experts at Sysdig documented a new attack that leveraged stolen cloud credentials to target ten cloud-hosted LLMs. The attackers Sysdig wrote about gathered cloud credentials through a known security vulnerability, but the researchers also found the attackers sold LLM access to other cybercriminals while sticking the cloud account owner with an astronomical bill.

“Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers: in this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted,” Sysdig researchers wrote. “If undiscovered, this type of attack could result in over $46,000 of LLM consumption costs per day for the victim.”

Ahl said it’s not certain who is responsible for operating and selling these sex chat services, but Permiso suspects the activity may be tied to a platform cheekily named “chub[.]ai,” which offers a broad selection of pre-made AI characters with whom users can strike up a conversation. Permiso said almost every character name from the prompts they captured in their honeypot could be found at Chub.

Some of the AI chat bot characters offered by Chub. Some of these characters include the tags “rape” and “incest.”

Chub offers free registration, via its website or a mobile app. But after a few minutes of chatting with their newfound AI friends, users are asked to purchase a subscription. The site’s homepage features a banner at the top that strongly suggests the service is reselling access to existing cloud accounts. It reads: “Banned from OpenAI? Get unmetered access to uncensored alternatives for as little as $5 a month.”

Until late last week Chub offered a wide selection of characters in a category called “NSFL” or Not Safe for Life, a term meant to describe content that is disturbing or nauseating to the point of being emotionally scarring.

Fortune profiled Chub AI in a January 2024 story that described the service as a virtual brothel advertised by illustrated girls in spaghetti strap dresses who promise a chat-based “world without feminism,” where “girls offer sexual services.” From that piece:

Chub AI offers more than 500 such scenarios, and a growing number of other sites are enabling similar AI-powered child pornographic role-play. They are part of a broader uncensored AI economy that, according to Fortune’s interviews with 18 AI developers and founders, was spurred first by OpenAI and then accelerated by Meta’s release of its open-source Llama tool.

Fortune says Chub is run by someone using the handle “Lore,” who said they launched the service to help others evade content restrictions on AI platforms. Chub charges fees starting at $5 a month to use the new chatbots, and the founder told Fortune the site had generated more than $1 million in annualized revenue.

KrebsOnSecurity sought comment about Permiso’s research from AWS, which initially seemed to downplay the seriousness of the researchers’ findings. The company noted that AWS employs automated systems that will alert customers if their credentials or keys are found exposed online.

AWS explained that when a key or credential pair is flagged as exposed, it is then restricted to limit the amount of abuse that attackers can potentially commit with that access. For example, flagged credentials can’t be used to create or modify authorized accounts, or spin up new cloud resources.

Ahl said Permiso did indeed receive multiple alerts from AWS about their exposed key, including one that warned their account may have been used by an unauthorized party. But they said the restrictions AWS placed on the exposed key did nothing to stop the attackers from using it to abuse Bedrock services.

Sometime in the past few days, however, AWS responded by including Bedrock in the list of services that will be quarantined in the event an AWS key or credential pair is found compromised or exposed online. AWS confirmed that Bedrock was a new addition to its quarantine procedures.

Additionally, not long after KrebsOnSecurity began reporting this story, Chub’s website removed its NSFL section. It also appears to have removed cached copies of the site from the Wayback Machine at archive.org. Still, Permiso found that Chub’s user stats page shows the site has more than 3,000 AI conversation bots with the NSFL tag, and that 2,113 accounts were following the NSFL tag.

The user stats page at Chub shows more than 2,113 people have subscribed to its AI conversation bots with the “Not Safe for Life” designation.

Permiso said their entire two-day experiment generated a $3,500 bill from AWS. Some of that cost was tied to the 75,000 LLM invocations caused by the sex chat service that hijacked their key. But they said the remaining cost was a result of turning on LLM prompt logging, which is not on by default and can get expensive very quickly.

Which may explain why none of Permiso’s clients had that type of logging enabled. Paradoxically, Permiso found that while enabling these logs is the only way to know for sure how crooks might be using a stolen key, the cybercriminals who are reselling stolen or exposed AWS credentials for sex chats have started including programmatic checks in their code to ensure they aren’t using AWS keys that have prompt logging enabled.

“Enabling logging is actually a deterrent to these attackers because they are immediately checking to see if you have logging on,” Ahl said. “At least some of these guys will totally ignore those accounts, because they don’t want anyone to see what they’re doing.”

In a statement shared with KrebsOnSecurity, AWS said its services are operating securely, as designed, and that no customer action is needed. Here is their statement:

“AWS services are operating securely, as designed, and no customer action is needed. The researchers devised a testing scenario that deliberately disregarded security best practices to test what may happen in a very specific scenario. No customers were put at risk. To carry out this research, security researchers ignored fundamental security best practices and publicly shared an access key on the internet to observe what would happen.”

“AWS, nonetheless, quickly and automatically identified the exposure and notified the researchers, who opted not to take action. We then identified suspected compromised activity and took additional action to further restrict the account, which stopped this abuse. We recommend customers follow security best practices, such as protecting their access keys and avoiding the use of long-term keys to the extent possible. We thank Permiso Security for engaging AWS Security.”

AWS said customers can configure model invocation logging to collect Bedrock invocation logs, model input data, and model output data for all invocations in the AWS account used in Amazon Bedrock. Customers can also use CloudTrail to monitor Amazon Bedrock API calls.

The company said AWS customers also can use services such as GuardDuty to detect potential security concerns and Billing Alarms to provide notifications of abnormal billing activity. Finally, AWS Cost Explorer is intended to give customers a way to visualize and manage Bedrock costs and usage over time.

Anthropic told KrebsOnSecurity it is always working on novel techniques to make its models more resistant to the kind of jailbreaks.

“We remain committed to implementing strict policies and advanced techniques to protect users, as well as publishing our own research so that other AI developers can learn from it,” Anthropic said in an emailed statement. “We appreciate the research community’s efforts in highlighting potential vulnerabilities.”

Anthropic said it uses feedback from child safety experts at Thorn around signals often seen in child grooming to update its classifiers, enhance its usage policies, fine tune its models, and incorporate those signals into testing of future models.

A Single Cloud Compromise Can Feed an Army of AI Sex Bots

Organizations that get relieved of credentials to their cloud environments can quickly find themselves part of a disturbing new trend: Cybercriminals using stolen cloud credentials to operate and resell sexualized AI-powered chat services. Researchers say these illicit chat bots, which use custom jailbreaks to bypass content filtering, often veer into darker role-playing scenarios, including child sexual exploitation and rape.

Image: Shutterstock.

Researchers at security firm Permiso Security say attacks against generative artificial intelligence (AI) infrastructure like Bedrock from Amazon Web Services (AWS) have increased markedly over the last six months, particularly when someone in the organization accidentally exposes their cloud credentials or key online, such as in a code repository like GitHub.

Investigating the abuse of AWS accounts for several organizations, Permiso found attackers had seized on stolen AWS credentials to interact with the large language models (LLMs) available on Bedrock. But they also soon discovered none of these AWS users had enabled logging (it is off by default), and thus they lacked any visibility into what attackers were doing with that access.

So Permiso researchers decided to leak their own test AWS key on GitHub, while turning on logging so that they could see exactly what an attacker might ask for, and what the responses might be.

Within minutes, their bait key was scooped up and used to power a service that offers AI-powered sex chats online.

“After reviewing the prompts and responses it became clear that the attacker was hosting an AI roleplaying service that leverages common jailbreak techniques to get the models to accept and respond with content that would normally be blocked,” Permiso researchers wrote in a report released today.

“Almost all of the roleplaying was of a sexual nature, with some of the content straying into darker topics such as child sexual abuse,” they continued. “Over the course of two days we saw over 75,000 successful model invocations, almost all of a sexual nature.”

Ian Ahl, senior vice president of threat research at Permiso, said attackers in possession of a working cloud account traditionally have used that access for run-of-the-mill financial cybercrime, such as cryptocurrency mining or spam. But over the past six months, Ahl said, Bedrock has emerged as one of the top targeted cloud services.

“Bad guy hosts a chat service, and subscribers pay them money,” Ahl said of the business model for commandeering Bedrock access to power sex chat bots. “They don’t want to pay for all the prompting that their subscribers are doing, so instead they hijack someone else’s infrastructure.”

Ahl said much of the AI-powered chat conversations initiated by the users of their honeypot AWS key were harmless roleplaying of sexual behavior.

“But a percentage of it is also geared toward very illegal stuff, like child sexual assault fantasies and rapes being played out,” Ahl said. “And these are typically things the large language models won’t be able to talk about.”

AWS’s Bedrock uses large language models from Anthropic, which incorporates a number of technical restrictions aimed at placing certain ethical guardrails on the use of their LLMs. But attackers can evade or “jailbreak” their way out of these restricted settings, usually by asking the AI to imagine itself in an elaborate hypothetical situation under which its normal restrictions might be relaxed or discarded altogether.

“A typical jailbreak will pose a very specific scenario, like you’re a writer who’s doing research for a book, and everyone involved is a consenting adult, even though they often end up chatting about nonconsensual things,” Ahl said.

In June 2024, security experts at Sysdig documented a new attack that leveraged stolen cloud credentials to target ten cloud-hosted LLMs. The attackers Sysdig wrote about gathered cloud credentials through a known security vulnerability, but the researchers also found the attackers sold LLM access to other cybercriminals while sticking the cloud account owner with an astronomical bill.

“Once initial access was obtained, they exfiltrated cloud credentials and gained access to the cloud environment, where they attempted to access local LLM models hosted by cloud providers: in this instance, a local Claude (v2/v3) LLM model from Anthropic was targeted,” Sysdig researchers wrote. “If undiscovered, this type of attack could result in over $46,000 of LLM consumption costs per day for the victim.”

Ahl said it’s not certain who is responsible for operating and selling these sex chat services, but Permiso suspects the activity may be tied to a platform cheekily named “chub[.]ai,” which offers a broad selection of pre-made AI characters with whom users can strike up a conversation. Permiso said almost every character name from the prompts they captured in their honeypot could be found at Chub.

Some of the AI chat bot characters offered by Chub. Some of these characters include the tags “rape” and “incest.”

Chub offers free registration, via its website or a mobile app. But after a few minutes of chatting with their newfound AI friends, users are asked to purchase a subscription. The site’s homepage features a banner at the top that strongly suggests the service is reselling access to existing cloud accounts. It reads: “Banned from OpenAI? Get unmetered access to uncensored alternatives for as little as $5 a month.”

Until late last week Chub offered a wide selection of characters in a category called “NSFL” or Not Safe for Life, a term meant to describe content that is disturbing or nauseating to the point of being emotionally scarring.

Fortune profiled Chub AI in a January 2024 story that described the service as a virtual brothel advertised by illustrated girls in spaghetti strap dresses who promise a chat-based “world without feminism,” where “girls offer sexual services.” From that piece:

Chub AI offers more than 500 such scenarios, and a growing number of other sites are enabling similar AI-powered child pornographic role-play. They are part of a broader uncensored AI economy that, according to Fortune’s interviews with 18 AI developers and founders, was spurred first by OpenAI and then accelerated by Meta’s release of its open-source Llama tool.

Fortune says Chub is run by someone using the handle “Lore,” who said they launched the service to help others evade content restrictions on AI platforms. Chub charges fees starting at $5 a month to use the new chatbots, and the founder told Fortune the site had generated more than $1 million in annualized revenue.

KrebsOnSecurity sought comment about Permiso’s research from AWS, which initially seemed to downplay the seriousness of the researchers’ findings. The company noted that AWS employs automated systems that will alert customers if their credentials or keys are found exposed online.

AWS explained that when a key or credential pair is flagged as exposed, it is then restricted to limit the amount of abuse that attackers can potentially commit with that access. For example, flagged credentials can’t be used to create or modify authorized accounts, or spin up new cloud resources.

Ahl said Permiso did indeed receive multiple alerts from AWS about their exposed key, including one that warned their account may have been used by an unauthorized party. But they said the restrictions AWS placed on the exposed key did nothing to stop the attackers from using it to abuse Bedrock services.

Sometime in the past few days, however, AWS responded by including Bedrock in the list of services that will be quarantined in the event an AWS key or credential pair is found compromised or exposed online. AWS confirmed that Bedrock was a new addition to its quarantine procedures.

Additionally, not long after KrebsOnSecurity began reporting this story, Chub’s website removed its NSFL section. It also appears to have removed cached copies of the site from the Wayback Machine at archive.org. Still, Permiso found that Chub’s user stats page shows the site has more than 3,000 AI conversation bots with the NSFL tag, and that 2,113 accounts were following the NSFL tag.

The user stats page at Chub shows more than 2,113 people have subscribed to its AI conversation bots with the “Not Safe for Life” designation.

Permiso said their entire two-day experiment generated a $3,500 bill from AWS. Some of that cost was tied to the 75,000 LLM invocations caused by the sex chat service that hijacked their key. But they said the remaining cost was a result of turning on LLM prompt logging, which is not on by default and can get expensive very quickly.

Which may explain why none of Permiso’s clients had that type of logging enabled. Paradoxically, Permiso found that while enabling these logs is the only way to know for sure how crooks might be using a stolen key, the cybercriminals who are reselling stolen or exposed AWS credentials for sex chats have started including programmatic checks in their code to ensure they aren’t using AWS keys that have prompt logging enabled.

“Enabling logging is actually a deterrent to these attackers because they are immediately checking to see if you have logging on,” Ahl said. “At least some of these guys will totally ignore those accounts, because they don’t want anyone to see what they’re doing.”

In a statement shared with KrebsOnSecurity, AWS said its services are operating securely, as designed, and that no customer action is needed. Here is their statement:

“AWS services are operating securely, as designed, and no customer action is needed. The researchers devised a testing scenario that deliberately disregarded security best practices to test what may happen in a very specific scenario. No customers were put at risk. To carry out this research, security researchers ignored fundamental security best practices and publicly shared an access key on the internet to observe what would happen.”

“AWS, nonetheless, quickly and automatically identified the exposure and notified the researchers, who opted not to take action. We then identified suspected compromised activity and took additional action to further restrict the account, which stopped this abuse. We recommend customers follow security best practices, such as protecting their access keys and avoiding the use of long-term keys to the extent possible. We thank Permiso Security for engaging AWS Security.”

AWS said customers can configure model invocation logging to collect Bedrock invocation logs, model input data, and model output data for all invocations in the AWS account used in Amazon Bedrock. Customers can also use CloudTrail to monitor Amazon Bedrock API calls.

The company said AWS customers also can use services such as GuardDuty to detect potential security concerns and Billing Alarms to provide notifications of abnormal billing activity. Finally, AWS Cost Explorer is intended to give customers a way to visualize and manage Bedrock costs and usage over time.

Anthropic told KrebsOnSecurity it is always working on novel techniques to make its models more resistant to the kind of jailbreaks.

“We remain committed to implementing strict policies and advanced techniques to protect users, as well as publishing our own research so that other AI developers can learn from it,” Anthropic said in an emailed statement. “We appreciate the research community’s efforts in highlighting potential vulnerabilities.”

Anthropic said it uses feedback from child safety experts at Thorn around signals often seen in child grooming to update its classifiers, enhance its usage policies, fine tune its models, and incorporate those signals into testing of future models.

This Windows PowerShell Phish Has Scary Potential

Many GitHub users this week received a novel phishing email warning of critical security holes in their code. Those who clicked the link for details were asked to distinguish themselves from bots by pressing a combination of keyboard keys that causes Microsoft Windows to download password-stealing malware. While it’s unlikely that many programmers fell for this scam, it’s notable because less targeted versions of it are likely to be far more successful against the average Windows user.

A reader named Chris shared an email he received this week that spoofed GitHub’s security team and warned: “Hey there! We have detected a security vulnerability in your repository. Please contact us at https://github-scanner[.]com to get more information on how to fix this issue.”

Visiting that link generates a web page that asks the visitor to “Verify You Are Human” by solving an unusual CAPTCHA.

This malware attack pretends to be a CAPTCHA intended to separate humans from bots.

Clicking the “I’m not a robot” button generates a pop-up message asking the user to take three sequential steps to prove their humanity. Step 1 involves simultaneously pressing the keyboard key with the Windows icon and the letter “R,” which opens a Windows “Run” prompt that will execute any specified program that is already installed on the system.

Executing this series of keypresses prompts the built-in Windows PowerShell to download password-stealing malware.

Step 2 asks the user to press the “Control” key and the letter “V” at the same time, which pastes malicious code from the site’s virtual clipboard.

Step 3 — pressing the “Enter” key — causes Windows to launch a PowerShell command, and then fetch and execute a malicious file from github-scanner[.]com called “l6e.exe.”

PowerShell is a powerful, cross-platform automation tool built into Windows that is designed to make it simpler for administrators to automate tasks on a PC or across multiple computers on the same network.

According to an analysis at the malware scanning service Virustotal.com, the malicious file downloaded by the pasted text is called Lumma Stealer, and it’s designed to snarf any credentials stored on the victim’s PC.

This phishing campaign may not have fooled many programmers, who no doubt understand that pressing the Windows and “R” keys will open up a “Run” prompt, or that Ctrl-V will dump the contents of the clipboard.

But I bet the same approach would work just fine to trick some of my less tech-savvy friends and relatives into running malware on their PCs. I’d also bet none of these people have ever heard of PowerShell, let alone had occasion to intentionally launch a PowerShell terminal.

Given those realities, it would be nice if there were a simple way to disable or at least heavily restrict PowerShell for normal end users for whom it could become more of a liability.

However, Microsoft strongly advises against nixing PowerShell because some core system processes and tasks may not function properly without it. What’s more, doing so requires tinkering with sensitive settings in the Windows registry, which can be a dicey undertaking even for the learned.
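
One narrower option is to disable just the Win+R “Run” dialog, the first link in this attack chain, via the documented “NoRun” Explorer policy. The sketch below makes that per-user registry change using only Python’s standard library; treat it as a starting point to test, not a blanket recommendation, since some legitimate workflows rely on the Run box:

    import winreg

    # Set the Explorer policy value that disables the Win+R "Run" dialog
    # for the current user. Takes effect at the next logon.
    path = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"
    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, path, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "NoRun", 0, winreg.REG_DWORD, 1)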

Still, it wouldn’t hurt to share this article with the Windows users in your life who fit the less-savvy profile. Because this particular scam has a great deal of room for growth and creativity.

Episode 248: GitHub’s Jill Moné-Corallo on Product Security And Supply Chain Threats

In this episode of the Security Ledger Podcast, Paul speaks with Jill Moné-Corallo, the Director of Product Security Engineering Response at GitHub. Jill talks about her journey from a college stint working at Apple’s Genius Bar to the information security space, first in product security at Apple and now at GitHub, a massive development platform that is increasingly in the crosshairs of sophisticated cyber criminals and nation-state actors.

The post Episode 248: GitHub’s Jill Moné-Corallo on Product Security And Supply Chain Threats appeared first on The Security Ledger with Paul F. Roberts.

Fighting Fake EDRs With ‘Credit Ratings’ for Police

When KrebsOnSecurity recently explored how cybercriminals were using hacked email accounts at police departments worldwide to obtain warrantless Emergency Data Requests (EDRs) from social media firms and technology providers, many security experts called it a fundamentally unfixable problem. But don’t tell that to Matt Donahue, a former FBI agent who recently quit the agency to launch a startup that aims to help tech companies do a better job screening out phony law enforcement data requests — in part by assigning trustworthiness or “credit ratings” to law enforcement authorities worldwide.

A sample Kodex dashboard. Image: Kodex.us.

Donahue is co-founder of Kodex, a company formed in February 2021 that builds security portals designed to help tech companies “manage information requests from government agencies who contact them, and to securely transfer data & collaborate against abuses on their platform.”

The 30-year-old Donahue said he left the FBI in April 2020 to start Kodex because it was clear that social media and technology companies needed help validating the increasingly large number of law enforcement requests domestically and internationally.

“So much of this is such an antiquated, manual process,” Donahue said of his perspective gained at the FBI. “In a lot of cases we’re still sending faxes when more secure and expedient technologies exist.”

Donahue said when he brought the subject up with his superiors at the FBI, they would kind of shrug it off, as if to say, “This is how it’s done and there’s no changing it.”

“My bosses told me I was committing career suicide doing this, but I genuinely believe fixing this process will do more for national security than a 20-year career at the FBI,” he said. “This is such a bigger problem than people give it credit for, and that’s why I left the bureau to start this company.”

One of the stated goals of Kodex is to build a scoring or reputation system for law enforcement personnel who make these data requests. After all, there are tens of thousands of police jurisdictions around the world — including roughly 18,000 in the United States alone — and all it takes for hackers to abuse the EDR process is illicit access to a single police email account.

Kodex is trying to tackle the problem of fake EDRs by working directly with the data providers to pool information about police or government officials submitting these requests, and hopefully making it easier for all customers to spot an unauthorized EDR.

Kodex’s first big client was cryptocurrency giant Coinbase, which confirmed their partnership but otherwise declined to comment for this story. Twilio confirmed it uses Kodex’s technology for law enforcement requests destined for any of its business units, but likewise declined to comment further.

Within their own separate Kodex portals, Twilio can’t see requests submitted to Coinbase, or vice versa. But each can see if a law enforcement entity or individual tied to one of their own requests has ever submitted a request to a different Kodex client, and then drill down further into other data about the submitter, such as Internet address(es) used, and the age of the requestor’s email address.

Donahue said in Kodex’s system, each law enforcement entity is assigned a credit rating, wherein officials who have a long history of sending valid legal requests will have a higher rating than someone sending an EDR for the first time.

“In those cases, we warn the customer with a flash on the request when it pops up that we’re allowing this to come through because the email was verified [as being sent from a valid police or government domain name], but we’re trying to verify the emergency situation for you, and we will change that rating once we get new information about the emergency,” Donahue said.

“This way, even if one customer gets a fake request, we’re able to prevent it from happening to someone else,” he continued. “In a lot of cases with fake EDRs, you can see the same email [address] being used to message different companies for data. And that’s the problem: So many companies are operating in their own silos and are not able to share information about what they’re seeing, which is why we’re seeing scammers exploit this good faith process of EDRs.”
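
Kodex has not published its scoring formula, but the general shape of such a reputation system is easy to illustrate. The toy sketch below is purely hypothetical; the fields and weights are invented for illustration and are not Kodex’s:

    from dataclasses import dataclass

    @dataclass
    class RequestorHistory:
        domain_verified: bool   # email sent from a real police/gov domain
        valid_requests: int     # prior requests confirmed legitimate
        flagged_requests: int   # prior requests disputed or proven fake

    def trust_score(h: RequestorHistory) -> float:
        """Toy heuristic: domain verification sets a floor, a track record
        of valid requests raises the score, and flags drag it down."""
        score = 0.3 if h.domain_verified else 0.0
        score += min(0.05 * h.valid_requests, 0.6)
        score -= 0.25 * h.flagged_requests
        return max(0.0, min(score, 1.0))

    # A first-time requestor from a verified domain scores just high
    # enough to pass through with a warning, as Donahue describes.
    print(trust_score(RequestorHistory(True, 0, 0)))   # 0.3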

NEEDLES IN THE HAYSTACK

As social media and technology platforms have grown over the years, so have the volumes of requests from law enforcement agencies worldwide for user data. For example, in its latest transparency report, mobile giant Verizon reported receiving 114,000 data requests of all types from U.S. law enforcement entities in the second half of 2021.

Verizon said approximately 35,000 of those requests (~30 percent) were EDRs, and that it provided data in roughly 91 percent of those cases. The company doesn’t disclose how many EDRs came from foreign law enforcement entities during that same time period. Verizon currently asks law enforcement officials to send these requests via fax.

Validating legal requests by domain name may be fine for data demands that include documents like subpoenas and search warrants, which can be validated with the courts. But not so for EDRs, which largely bypass any official review and do not require the requestor to submit any court-approved documents.

Police and government authorities can legitimately request EDRs to learn the whereabouts or identities of people who have posted online about plans to harm themselves or others, or in other exigent circumstances such as a child abduction or abuse, or a potential terrorist attack.

But as KrebsOnSecurity reported in March, it is now clear that crooks have figured out there is no quick and easy way for a company that receives one of these EDRs to know whether it is legitimate. Using illicit access to hacked police email accounts, the attackers will send a fake EDR along with an attestation that innocent people will likely suffer greatly or die unless the requested data is provided immediately.

In this scenario, the receiving company finds itself caught between two unsavory outcomes: Failing to immediately comply with an EDR — and potentially having someone’s blood on their hands — or possibly leaking a customer record to the wrong person. That might explain why the compliance rate for EDRs is usually quite high — often upwards of 90 percent.

Fake EDRs have become such a reliable method in the cybercrime underground for obtaining information about account holders that several cybercriminals have started offering services that will submit these fraudulent EDRs on behalf of paying clients to a number of top social media and technology firms.

A fake EDR service advertised on a hacker forum in 2021.

An individual who is part of the community of crooks abusing fake EDRs told KrebsOnSecurity the schemes often involve hacking into police department emails by first compromising the agency’s website. From there, they can drop a backdoor “shell” on the server to secure permanent access, and then create new email accounts within the hacked organization.

In other cases, hackers will try to guess the passwords of police department email systems. In these attacks, the hackers will identify email addresses associated with law enforcement personnel, and then attempt to authenticate using passwords those individuals have used at other websites that have been breached previously.

EDR OVERLOAD?

Donahue said depending on the industry, EDRs make up between 5 percent and 30 percent of the total volume of requests. In contrast, he said, EDRs amount to less than three percent of the requests sent through Kodex portals used by customers.

KrebsOnSecurity sought to verify those numbers by compiling EDR statistics based on annual or semi-annual transparency reports from some of the largest technology and social media firms. While there are no available figures on the number of fake EDRs each provider is receiving each year, those phony requests can easily hide amid an increasingly heavy torrent of legitimate demands.

Meta/Facebook says roughly 11 percent of all law enforcement data requests — 21,700 of them — were EDRs in the first half of 2021. Almost 80 percent of the time the company produced at least some data in response. Facebook has long used its own online portal where law enforcement officials must first register before submitting requests.

Government data requests, including EDRs, received by Facebook over the years. Image: Meta Transparency Report.

Apple said it received 1,162 emergency requests for data in the last reporting period it made public — July – December 2020. Apple’s compliance with EDRs was 93 percent worldwide in 2020. Apple’s website says it accepts EDRs via email, after applicants have filled out a supplied PDF form. [As a lifelong Apple user and customer, I was floored to learn that the richest company in the world — which for several years has banked heavily on privacy and security promises to customers — still relies on email for such sensitive requests].

Twitter says it received 1,860 EDRs in the first half of 2021, or roughly 15 percent of the global information requests sent to Twitter. Twitter accepts EDRs via an interactive form on the company’s website. Twitter reports that EDRs decreased by 25% during this reporting period, while the aggregate number of accounts specified in these requests decreased by 15%. The United States submitted the highest volume of global emergency requests (36%), followed by Japan (19%), and India (12%).

Discord reported receiving 378 requests for emergency data disclosure in the first half of 2021. Discord accepts EDRs via a specified email address.

For the six months ending in December 2021, Snapchat said it received 2,085 EDRs from authorities in the United States (with a 59 percent compliance rate), and another 1,448 from international police (64 percent granted). Snapchat has a form for submitting EDRs on its website.

TikTok’s resources on government data requests currently lead to a “Page not found” error, but a company spokesperson said TikTok received 715 EDRs in the first half of 2021. That’s up from 409 EDRs in the previous six months. TikTok handles EDRs via a form on its website.

The current transparency reports for both Google and Microsoft do not break out EDRs by category. Microsoft says that in the second half of 2021 it received more than 25,000 government requests, and that it complied at least partly with those requests more than 90 percent of the time.

Microsoft runs its own portal that law enforcement officials must register at to submit legal requests, but that portal doesn’t accept requests for other Microsoft properties, such as LinkedIn or GitHub.

Google said it received more than 113,000 government requests for user data in the last half of 2020, and that about 76 percent of the requests resulted in the disclosure of some user information. Google doesn’t publish EDR numbers, and it did not respond to requests for those figures. Google also runs its own portal for accepting law enforcement data requests.

Verizon reports (PDF) receiving more than 35,000 EDRs from just U.S. law enforcement in the second half of 2021, out of a total of 114,000 law enforcement requests (Verizon doesn’t disclose how many EDRs came from foreign law enforcement entities). Verizon said it complied with approximately 91 percent of requests. The company accepts law enforcement requests via snail mail or fax.

Image: Verizon.com.

AT&T says (PDF) it received nearly 19,000 EDRs in the second half of 2021; it provided some data roughly 95 percent of the time. AT&T requires EDRs to be faxed.

The most recent transparency report published by T-Mobile says the company received more than 164,000 “emergency/911” requests in 2020 — but it does not specifically call out EDRs. Like its old school telco brethren, T-Mobile requires EDRs to be faxed. T-Mobile did not respond to requests for more information.

Data from T-Mobile’s most recent transparency report in 2020. Image: T-Mobile.

Pro-Ukraine ‘Protestware’ Pushes Antiwar Ads, Geo-Targeted Malware

Researchers are tracking a number of open-source “protestware” projects on GitHub that have recently altered their code to display “Stand with Ukraine” messages for users, or basic facts about the carnage in Ukraine. The group also is tracking several code packages that were recently modified to erase files on computers that appear to be coming from Russian or Belarusian Internet addresses.

The upstart tracking effort is being crowdsourced via Telegram, but the output of the Russian research group is centralized in a Google Spreadsheet that is open to the public. Most of the GitHub code repositories tracked by this group include relatively harmless components that will either display a simple message in support of Ukraine, or show statistics about the war in Ukraine — such as casualty numbers — and links to more information on the Deep Web.

For example, the popular library ES5-ext hadn’t updated its code in nearly two years. But on March 7, the code project added a component “postinstall.js,” which checks to see if the user’s computer is tied to a Russian Internet address. If so, the code broadcasts a “Call for peace:”

A message that appears for Russian users of the popular es5-ext code library on GitHub. The message has been Google-Translated from Russian to English.
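
The real component is a few lines of JavaScript run at install time, but the logic is simple enough to sketch in Python. In the hypothetical sketch below, “ru_ranges.csv” is a stand-in local list of Russian CIDR blocks, one network per row, substituting for whatever lookup the actual script performs:

    import csv
    import ipaddress

    def is_russian_ip(addr: str, ranges_csv: str = "ru_ranges.csv") -> bool:
        """Return True if addr falls inside any CIDR block in the CSV
        (one network per row, e.g. '5.8.0.0/19')."""
        ip = ipaddress.ip_address(addr)
        with open(ranges_csv, newline="") as f:
            return any(ip in ipaddress.ip_network(row[0])
                       for row in csv.reader(f) if row)

    if is_russian_ip("203.0.113.7"):  # documentation address, for testing
        print("Call for peace")       # the es5-ext message would go here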

A more concerning example can be found at the GitHub page for “vue-cli,” the standard command-line tooling for Vue.js, a popular JavaScript framework for building web-based user interfaces. On March 15, users discovered a new component had been added that was designed to wipe all files from any systems visiting from a Russian or Belarusian Internet address (the malicious code has since been removed):

Readers complaining that an update to the popular Vue-Cli package sought to wipe files if the user was coming from a Russian IP address.

“Man, I love politics in my APIs,” GitHub user “MSchleckser” commented wryly on Mar. 15.

The crowdsourced effort also blacklisted a code library called “PeaceNotWar” maintained by GitHub user RIAEvangelist.

“This code serves as a non-destructive example of why controlling your node modules is important,” RIAEvangelist wrote. “It also serves as a non-violent protest against Russia’s aggression that threatens the world right now. This module will add a message of peace on your users’ desktops, and it will only do it if it does not already exist just to be polite. To include this module in your code, just run npm i peacenotwar in your code’s directory or module root.”

Alex Holden is a native Ukrainian who runs the Minneapolis-based cyber intelligence firm Hold Security. Holden said the real trouble starts when protestware is included in code packages that get automatically fetched by a myriad of third-party software products. Holden said some of the code projects tracked by the Russian research group are maintained by Ukrainian software developers.

“Ukrainian and sadly non-Ukrainian developers are modifying their public software to trigger malware or pro-Ukraine ads when deployed on Russian computers,” Holden said. “And we see this effort, which is the Russians trying to defend against that.”

Commenting on the malicious code added to the “Vue-cli” application, GitHub user “nm17” said a continued expansion of protestware would erode public trust in open-source software.

“The Pandora’s box is now opened, and from this point on, people who use opensource will experience xenophobia more than ever before, EVERYONE included,” NM17 wrote. “The trust factor of open source, which was based on good will of the developers is now practically gone, and now, more and more people are realizing that one day, their library/application can possibly be exploited to do/say whatever some random dev on the internet thought ‘was the right thing to do.’ Not a single good came out of this ‘protest.'”

GoDaddy Employees Used in Attacks on Multiple Cryptocurrency Services

Fraudsters redirected email and web traffic destined for several cryptocurrency trading platforms over the past week. The attacks were facilitated by scams targeting employees at GoDaddy, the world’s largest domain name registrar, KrebsOnSecurity has learned.

The incident is the latest incursion at GoDaddy that relied on tricking employees into transferring ownership and/or control over targeted domains to fraudsters. In March, a voice phishing scam targeting GoDaddy support employees allowed attackers to assume control over at least a half-dozen domain names, including transaction brokering site escrow.com.

And in May of this year, GoDaddy disclosed that 28,000 of its customers’ web hosting accounts were compromised following a security incident in Oct. 2019 that wasn’t discovered until April 2020.

This latest campaign appears to have begun on or around Nov. 13, with an attack on cryptocurrency trading platform liquid.com.

“A domain hosting provider ‘GoDaddy’ that manages one of our core domain names incorrectly transferred control of the account and domain to a malicious actor,” Liquid CEO Mike Kayamori said in a blog post. “This gave the actor the ability to change DNS records and in turn, take control of a number of internal email accounts. In due course, the malicious actor was able to partially compromise our infrastructure, and gain access to document storage.”

In the early morning hours of Nov. 18 Central European Time (CET), cryptocurrency mining service NiceHash discovered that some of the settings for its domain registration records at GoDaddy were changed without authorization, briefly redirecting email and web traffic for the site. NiceHash froze all customer funds for roughly 24 hours until it was able to verify that its domain settings had been restored.

“At this moment in time, it looks like no emails, passwords, or any personal data were accessed, but we do suggest resetting your password and activate 2FA security,” the company wrote in a blog post.

NiceHash founder Matjaz Skorjanc said the unauthorized changes were made from an Internet address at GoDaddy, and that the attackers used their control over NiceHash’s incoming email to attempt password resets on various third-party services, including Slack and GitHub. But he said GoDaddy was impossible to reach at the time because the registrar was in the midst of a widespread system outage that left its phone and email systems unresponsive.

“We detected this almost immediately [and] started to mitigate [the] attack,” Skorjanc said in an email to this author. “Luckily, we fought them off well and they did not gain access to any important service. Nothing was stolen.”

Skorjanc said NiceHash’s email service was redirected to privateemail.com, an email platform run by Namecheap Inc., another large domain name registrar. Using Farsight Security, a service that maps changes to domain name records over time, KrebsOnSecurity queried for all GoDaddy-registered domains whose email records had been altered in the past week to point to privateemail.com, then indexed those results against the top one million most popular websites according to Alexa.com.
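That cross-referencing step is straightforward to reproduce. Below is a minimal sketch, assuming a local JSONL export of passive DNS results (the file name and the rrname/rrtype/rdata field names are illustrative and vary by provider) and Alexa’s top-1m.csv list, which has one rank,domain pair per line:

import csv
import json

# Illustrative file and field names; real passive DNS exports vary by provider.
PDNS_EXPORT = "pdns_mx_changes.jsonl"   # assumed export of recent MX record changes
ALEXA_CSV = "top-1m.csv"                # Alexa top sites list: rank,domain per line

def suspicious_domains(pdns_path, needle="privateemail.com"):
    # collect domains whose MX records recently began pointing at the needle
    domains = set()
    with open(pdns_path) as fh:
        for line in fh:
            rec = json.loads(line)
            if rec.get("rrtype") == "MX" and needle in rec.get("rdata", ""):
                domains.add(rec["rrname"].rstrip("."))
    return domains

def popular_domains(alexa_path):
    # Alexa's CSV has no header row, just rank,domain
    with open(alexa_path, newline="") as fh:
        return {row[1] for row in csv.reader(fh)}

if __name__ == "__main__":
    # the intersection: popular sites whose mail was recently rerouted
    hits = suspicious_domains(PDNS_EXPORT) & popular_domains(ALEXA_CSV)
    for domain in sorted(hits):
        print(domain)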

The results show that several other cryptocurrency platforms may also have been targeted by the same group, including Bibox.com, Celsius.network, and Wirex.app. None of these companies responded to requests for comment.

In response to questions from KrebsOnSecurity, GoDaddy acknowledged that “a small number” of customer domain names had been modified after a “limited” number of GoDaddy employees fell for a social engineering scam. GoDaddy said the outage between 7:00 p.m. and 11:00 p.m. PST on Nov. 17 was not related to a security incident, but rather a technical issue that materialized during planned network maintenance.

“Separately, and unrelated to the outage, a routine audit of account activity identified potential unauthorized changes to a small number of customer domains and/or account information,” GoDaddy spokesperson Dan Race said. “Our security team investigated and confirmed threat actor activity, including social engineering of a limited number of GoDaddy employees.

“We immediately locked down the accounts involved in this incident, reverted any changes that took place to accounts, and assisted affected customers with regaining access to their accounts,” GoDaddy’s statement continued. “As threat actors become increasingly sophisticated and aggressive in their attacks, we are constantly educating employees about new tactics that might be used against them and adopting new security measures to prevent future attacks.”

Race declined to specify how its employees were tricked into making the unauthorized changes, saying the matter was still under investigation. But in the attacks earlier this year that affected escrow.com and several other GoDaddy customer domains, the assailants targeted employees over the phone, and were able to read internal notes that GoDaddy employees had left on customer accounts.

What’s more, the attack on escrow.com redirected the site to an Internet address in Malaysia that hosted fewer than a dozen other domains, including the phishing website servicenow-godaddy.com. This suggests the attackers behind the March incident — and possibly this latest one — succeeded by calling GoDaddy employees and convincing them to use their employee credentials at a fraudulent GoDaddy login page.

In August 2020, KrebsOnSecurity warned about a marked increase in large corporations being targeted in sophisticated voice phishing or “vishing” scams. Experts say the success of these scams has been aided greatly by many employees working remotely thanks to the ongoing Coronavirus pandemic.

A typical vishing scam begins with a series of phone calls to employees working remotely at a targeted organization. The phishers often will explain that they’re calling from the employer’s IT department to help troubleshoot issues with the company’s email or virtual private networking (VPN) technology.

The goal is to convince the target either to divulge their credentials over the phone or to input them manually at a website set up by the attackers that mimics the organization’s corporate email or VPN portal.

On July 15, a number of high-profile Twitter accounts were used to tweet out a bitcoin scam that earned more than $100,000 in a few hours. According to Twitter, that attack succeeded because the perpetrators were able to social engineer several Twitter employees over the phone into giving away access to internal Twitter tools.

An alert issued jointly by the FBI and the Cybersecurity and Infrastructure Security Agency (CISA) says the perpetrators of these vishing attacks compile dossiers on employees at their targeted companies using mass scraping of public profiles on social media platforms, recruiter and marketing tools, publicly available background check services, and open-source research.

The FBI/CISA advisory includes a number of suggestions that companies can implement to help mitigate the threat from vishing attacks, including:

• Restrict VPN connections to managed devices only, using mechanisms like hardware checks or installed certificates, so user input alone is not enough to access the corporate VPN.

• Restrict VPN access hours, where applicable, to mitigate access outside of allowed times.

• Employ domain monitoring to track the creation of, or changes to, corporate, brand-name domains (a minimal sketch of one such check appears after this list).

• Actively scan and monitor web applications for unauthorized access, modification, and anomalous activities.

• Employ the principle of least privilege and implement software restriction policies or other controls; monitor authorized user accesses and usage.

• Consider using a formalized authentication process for employee-to-employee communications made over the public telephone network, where a second factor is used to authenticate the phone call before sensitive information can be discussed.

• Improve 2FA and OTP messaging to reduce confusion about employee authentication attempts.

• Verify web links do not have misspellings or contain the wrong domain.

• Bookmark the correct corporate VPN URL and do not visit alternative URLs on the sole basis of an inbound phone call.

• Be suspicious of unsolicited phone calls, visits, or email messages from unknown individuals claiming to be from a legitimate organization. Do not provide personal information or information about your organization, including its structure or networks, unless you are certain of a person’s authority to have the information. If possible, try to verify the caller’s identity directly with the company.

• If you receive a vishing call, document the phone number of the caller as well as the domain that the actor tried to send you to and relay this information to law enforcement.

• Limit the amount of personal information you post on social networking sites. The internet is a public resource; only post information you are comfortable with anyone seeing.

• Evaluate your settings: sites may change their options periodically, so review your security and privacy settings regularly to make sure that your choices are still appropriate.
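On the domain-monitoring item above: one lightweight approach, in the spirit of tools like dnstwist, is to generate lookalike permutations of your brand domains and periodically check which of them resolve. The sketch below is illustrative only; godaddy.com and the keyword-prefix list are stand-in examples, and a production monitor would also watch certificate transparency logs and registration records:

import socket

def permutations(domain):
    # generate simple lookalike variants of a brand domain
    name, tld = domain.rsplit(".", 1)
    variants = set()
    for i in range(len(name)):                      # omission: gdaddy.com
        variants.add(name[:i] + name[i + 1:] + "." + tld)
    for i in range(len(name)):                      # repetition: ggodaddy.com
        variants.add(name[:i] + name[i] + name[i:] + "." + tld)
    for i in range(len(name) - 1):                  # transposition: ogdaddy.com
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:] + "." + tld)
    for word in ("servicenow", "sso", "vpn", "login"):  # keyword prefixes (illustrative)
        variants.add(word + "-" + domain)
    variants.discard(domain)
    return sorted(variants)

def resolves(domain):
    # a lookalike that resolves is registered and worth a closer look
    try:
        socket.gethostbyname(domain)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    for candidate in permutations("godaddy.com"):
        if resolves(candidate):
            print("registered lookalike:", candidate)

Note that classic typo permutations alone would miss a domain like servicenow-godaddy.com, the phishing site used in the March incident, which is why the sketch also checks keyword-prefixed variants.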

truffleHog – Search Git for High Entropy Strings with Commit History

truffleHog is a Python-based tool that searches Git repositories for high-entropy strings, digging deep into commit history and branches. This is effective at finding secrets that were accidentally committed.

truffleHog previously functioned by running entropy checks on git diffs. This functionality still exists, but high-signal regex checks have been added, along with the ability to suppress entropy checking.

truffleHog --regex --entropy=False https://github.com/dxa4481/truffleHog.git

or

truffleHog file:///user/dxa4481/codeprojects/truffleHog/

truffleHog will go through the entire commit history of each branch, checking each diff from each commit for secrets.
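The entropy check at the heart of this approach boils down to computing Shannon entropy over base64 and hex alphabets and flagging long runs that score high. The sketch below illustrates the idea; the length cutoff and thresholds are illustrative values, not truffleHog’s exact code:

import math

BASE64_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="
HEX_CHARS = "0123456789abcdefABCDEF"

def shannon_entropy(data):
    # bits of entropy per character in the string
    if not data:
        return 0.0
    entropy = 0.0
    for ch in set(data):
        p = data.count(ch) / len(data)
        entropy -= p * math.log2(p)
    return entropy

def runs_of(word, charset):
    # contiguous substrings of word drawn entirely from charset
    runs, current = [], ""
    for ch in word:
        if ch in charset:
            current += ch
        else:
            if current:
                runs.append(current)
            current = ""
    if current:
        runs.append(current)
    return runs

def find_high_entropy_strings(text, min_len=20, b64_threshold=4.5, hex_threshold=3.0):
    # flag long, high-entropy base64/hex runs -- likely keys or tokens
    hits = []
    for word in text.split():
        for s in runs_of(word, BASE64_CHARS):
            if len(s) > min_len and shannon_entropy(s) > b64_threshold:
                hits.append(s)
        for s in runs_of(word, HEX_CHARS):
            if len(s) > min_len and shannon_entropy(s) > hex_threshold:
                hits.append(s)
    return hits

if __name__ == "__main__":
    # AWS's documentation example secret key scores above the base64 threshold
    diff_line = "+aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
    print(find_high_entropy_strings(diff_line))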


Retailer Orvis.com Leaked Hundreds of Internal Passwords on Pastebin

Orvis, a Vermont-based retailer that specializes in high-end fly fishing equipment and other sporting goods, leaked hundreds of internal passwords on Pastebin.com for several weeks last month, exposing credentials the company used to manage everything from firewalls and routers to administrator accounts and database servers, KrebsOnSecurity has learned. Orvis says the exposure was inadvertent, and that many of the credentials were already expired.

Based in Sunderland, Vt., and founded in 1856, privately-held Orvis is the oldest mail-order retailer in the United States. The company has approximately 1,700 employees, 69 retail stores and 10 outlets in the U.S., and 18 retail stores in the UK.

In late October, this author received a tip from Wisconsin-based security firm Hold Security that a file containing a staggering number of internal usernames and passwords for Orvis had been posted to Pastebin.

Reached for comment about the source of the document, Orvis spokesperson Tucker Kimball said it was only available for a day before the company had it removed from Pastebin.

“The file contains old credentials, so many of the devices associated with the credentials are decommissioned and we took steps to address the remaining ones,” Kimball said. “We are leveraging our existing security tools to conduct an investigation to determine how this occurred.”

However, according to Hold Security founder Alex Holden, this enormous password file was actually posted to Pastebin on two separate occasions last month, the first on Oct. 4 and the second on Oct. 22. That finding was corroborated by 4iq.com, a company that aggregates information from leaked databases online.

Orvis did not respond to follow-up requests for comment via phone and email; the last two email messages sent by KrebsOnSecurity to Orvis were returned simply as “blocked.”

It’s not unusual for employees or contractors to post bits of sensitive data to public sites like Pastebin and Github, but the credentials file apparently published by someone working at or for Orvis is by far the most extreme example I’ve ever witnessed.

For instance, included in the Pastebin files from Orvis were plaintext usernames and passwords for just about every kind of online service or security product the company has used, including:

-Antivirus engines
-Data backup services
-Multiple firewall products
-Linux servers
-Cisco routers
-Netflow data
-Call recording services
-DNS controls
-Orvis wireless networks (public and private)
-Employee wireless phone services
-Oracle database servers
-Microsoft 365 services
-Microsoft Active Directory accounts and passwords
-Battery backup systems
-Security cameras
-Encryption certificates
-Mobile payment services
-Door and alarm codes
-FTP credentials
-Apple ID credentials
-Door controllers

By all accounts, this was a comprehensive goof: The Orvis credentials file even contained the combination to a locked safe in the company’s server room.

The only clue about the source of the Orvis password file is a notation at the top of the document that reads “VT Technical Services.”

Holden said this particular exposure also highlights the risks posed by third parties, as the leak most likely did not originate with Orvis staff itself.

“This is a continuously growing trend of exposures created not by the victims but by those that they consider to be trusted partners,” Holden said.

It’s fairly remarkable that a company can spend millions on all the security technology under the sun and have all of it potentially undermined by one ill-advised post to Pastebin, but that is certainly the reality we live in today.

Long gone are the days when one could post something for a few hours to a public document hosting service and expect nobody to notice when it’s taken down. Today there are a number of third-party services that regularly index and preserve such postings, regardless of how ephemeral those postings may be.

“Pastebin and other similar repositories are constantly being monitored and any data put out there will be preserved no matter how brief the posting is,” Holden said. “In the current threat landscape, we see data exposures nearly as often as we see data breaches. These exposures vary in scope and impact, and this particular one is as bad as they come without specific data exposures.”

If you’re responsible for securing your organization’s environment, it would be an excellent idea to build some tools that monitor for your domains and brands at Pastebin, GitHub and other sites where employees sometimes publish sensitive corporate data, inadvertently or otherwise. There are many ways to do this; here’s one example.
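One possible approach is a minimal sketch against GitHub’s code search API, shown below. The search terms are placeholders for your own domains and brand strings; authenticated requests are required for code search, and Pastebin monitoring generally requires its paid scraping API or a commercial feed:

import json
import os
import urllib.parse
import urllib.request

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]  # a personal access token
SEARCH_TERMS = ['"corp.example.com"', '"example-internal"']  # your domains/brands

def search_github_code(term):
    # GitHub code search endpoint; code search is tightly rate-limited, so poll gently
    url = "https://api.github.com/search/code?q=" + urllib.parse.quote(term)
    req = urllib.request.Request(url, headers={
        "Authorization": "token " + GITHUB_TOKEN,
        "Accept": "application/vnd.github+json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for term in SEARCH_TERMS:
        results = search_github_code(term)
        for item in results.get("items", []):
            print(term, "->", item["repository"]["full_name"], item["html_url"])

A production version would run on a schedule, de-duplicate previously seen hits, and alert on anything new rather than printing to stdout.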

Have you built such monitoring tools for your organization or employer? If so, please feel free to sound off about your approach in the comments below.