Internet Security

Texas National Guard Faces Calls to Shoot Migrants After Being Overpowered

By Nick Mordowanec, Staff Writer

The Texas National Guard is being encouraged by some social media users to be more violent in response to migrants attempting to enter the United States illegally.

Illegal immigration has risen under President Joe Biden’s watch and continues to divide communities around the country, notably border states and cities with sanctuary status.

Videos taken Thursday on the U.S.-Mexico border in El Paso, Texas, showed a throng of migrants, described by on-scene reporters as hundreds of individuals of different nationalities, causing a “riot” and using force to try to overpower soldiers.

The incident occurred as Texas waits to see whether it can enforce its own immigration laws, including arrests and deportations, under legislation known as Senate Bill 4 (S.B. 4). The legislation, previously approved by state lawmakers, continues to be litigated in the appeals courts and has reached the U.S. Supreme Court.

Total border crossings exceeded 988,900 individuals between October and December, following a record-setting number of 2.4 million migrant encounters at the southern border in fiscal 2023—up from approximately 1.7 million in 2021.

Newsweek reached out to the Texas Department of Public Safety and other officials via email for comment.

“The TX National Guard & Dept. of Public Safety quickly regained control & are redoubling the razor wire barriers,” Texas Governor Greg Abbott wrote on X, formerly Twitter, following the border incident. “DPS is instructed to arrest every illegal immigrant involved for criminal trespass & destruction of property.”

House Speaker Mike Johnson, a Louisiana Republican, described the scene as “chilling.”

“This is the result of the Biden Administration refusing to secure our border and protect America,” he wrote on X.

Immigrants wait for transport and processing after crossing the U.S.-Mexico border on March 13 in El Paso, Texas. (John Moore/Getty Images)

Mexican photojournalist J. Omar Ornelas, who is based along the U.S.-Mexico border, posted video of the El Paso scene from a different angle on X, showing migrants purportedly from Africa, Central America, Colombia and Venezuela breaching concertina wire to reach the larger border wall.

Charlie Kirk, founder and president of the conservative organization Turning Point USA, wrote on X that having a national border means individuals have to protect it.

“Ultimately, having a border means being willing to have armed men at the border willing to use force to stop those attempting to cross it,” Kirk wrote. “If you aren’t willing to do that, then your border is fake — anyone who wants it badly enough can just force their way in. The world is calling Biden’s bluff.”

In February, Representative Morgan Luttrell, a Texas Republican, introduced the Defend Our Borders from Armed Invaders Act in the U.S. House, authorizing the National Guard to escalate force as necessary to repel an armed individual attempting to illegally enter the U.S. through Mexico.

A spokesperson for the congressman told Newsweek via email on Friday that the legislation applies only to those migrants carrying lethal weapons. The bill currently awaits committee mark-up.

“This border crisis is a full-on invasion, and the Biden Administration continues to recklessly turn a blind eye to the ongoing danger this presents,” Luttrell, a 14-year U.S. Navy veteran, told Newsweek. “I fully support Governor Abbott’s and the Texas Guard’s efforts to secure our border.”


Abbott’s words, meanwhile, sparked some impassioned criticism on social media.

“Lethal force required,” one X user wrote in response to Abbott.

Another X user wrote: “If citizens did that to law-enforcement, they would be tased or shot, and they’d be lucky to be arrested. It’s time to deal harshly with invaders, Governor. We have sonic and millimeter wave crowd-control weapons. It’s time to use them.”

Podcaster and U.S. military veteran Wayne DuPree described the scene on X as an “invasion [that] should be dealt with accordingly,” adding that refugees don’t assault border agents.

“What good are guns at the border if we aren’t going to use them?” political commentator and Donald Trump supporter Gunther Eagleman asked on X.

“An unarmed American female veteran was shot to death at near point-blank range on Jan 6 because a federal officer considered her a threat for invading a public building,” wrote political commentator Julie Kelly. “Hey @LindseyGrahamSC where are your shoot to kill orders for these invaders?”

Update 03/22/24, 11:30 a.m. ET: This article was updated with comment from Luttrell.


Internet Security

New FCA Crypto Custody Rules Would Force Firms to Upgrade Security

Regulators in the UK have taken a step closer to formal crypto oversight. The Financial Conduct Authority (FCA) has opened consultations on new rules governing stablecoins and the custody of digital assets. The proposals are part of an effort to establish a safer, more transparent environment for crypto services…

Internet Security

AI cybersecurity risks and deepfake scams on the rise


Imagine your phone rings and the voice on the other end sounds just like your boss, a close friend, or even a government official. They urgently ask for sensitive information, except it’s not really them. It’s a deepfake, powered by AI, and you’re the target of a sophisticated scam. These kinds of attacks are happening right now, and they’re getting more convincing every day.

That’s the warning sounded by the 2025 AI Security Report, unveiled at the RSA Conference (RSAC), one of the world’s biggest gatherings for cybersecurity experts, companies, and law enforcement. The report details how criminals are harnessing artificial intelligence to impersonate people, automate scams, and attack security systems on a massive scale.

From hijacked AI accounts and manipulated models to live video scams and data poisoning, the report paints a picture of a rapidly evolving threat landscape, one that’s touching more lives than ever before.


Illustration of cybersecurity risks. (Kurt “CyberGuy” Knutsson)

AI tools are leaking sensitive data

One of the biggest risks of using AI tools is what users accidentally share with them. A recent analysis by cybersecurity firm Check Point found that 1 in every 80 AI prompts includes high-risk data, and about 1 in 13 contains sensitive information that could expose users or organizations to security or compliance risks.

This data can include passwords, internal business plans, client information, or proprietary code. When shared with AI tools that are not secured, this information can be logged, intercepted, or even leaked later.
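To make the risk concrete, here is a minimal sketch of the kind of pre-submission guardrail an organization might place in front of an AI tool. It is hypothetical and in Python; the pattern list is deliberately tiny, and a real deployment would use a dedicated secret scanner, but it shows the basic idea of flagging obvious secrets before a prompt ever leaves the device.

```python
import re

# Illustrative subset of high-risk patterns; a production filter would use
# a dedicated secret-scanning library with far broader coverage.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any high-risk patterns found in the prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

prompt = "Summarize this config: key=AKIAABCDEFGHIJKLMNOP"
hits = flag_sensitive(prompt)
if hits:
    print("Blocked prompt, found:", ", ".join(hits))  # found: aws_access_key
```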

Deepfake scams are now real-time and multilingual

AI-powered impersonation is getting more advanced every month. Criminals can now fake voices and faces convincingly in real time. In early 2024, a British engineering firm lost 20 million pounds after scammers used live deepfake video to impersonate company executives during a Zoom call. The attackers looked and sounded like trusted leaders and convinced an employee to transfer funds.

Real-time video manipulation tools are now being sold on criminal forums. These tools can swap faces and mimic speech during video calls in multiple languages, making it easier for attackers to run scams across borders.

Illustration of a person video conferencing on their laptop. (Kurt “CyberGuy” Knutsson)

AI is running phishing and scam operations at scale

Social engineering has always been a part of cybercrime. Now, AI is automating it. Attackers no longer need to speak a victim’s language, stay online constantly, or manually write convincing messages.

Tools like GoMailPro use ChatGPT to create phishing and spam emails with perfect grammar and native-sounding tone. These messages are far more convincing than the sloppy scams of the past. GoMailPro can generate thousands of unique emails, each slightly different in language and urgency, which helps them slip past spam filters. It is actively marketed on underground forums for around $500 per month, making it widely accessible to bad actors.

Another tool, the X137 Telegram Console, leverages Gemini AI to monitor and respond to chat messages automatically. It can impersonate customer support agents or known contacts, carrying out real-time conversations with multiple targets at once. The replies are uncensored, fast, and customized based on the victim’s responses, giving the illusion of a human behind the screen.

AI is also powering large-scale sextortion scams. These are emails that falsely claim to have compromising videos or photos and demand payment to prevent them from being shared. Instead of using the same message repeatedly, scammers now rely on AI to rewrite the threat in dozens of ways. For example, a basic line like “Time is running out” might be reworded as “The hourglass is nearly empty for you,” making the message feel more personal and urgent while also avoiding detection.

By removing the need for language fluency and manual effort, these AI tools allow attackers to scale their phishing operations dramatically. Even inexperienced scammers can now run large, personalized campaigns with almost no effort. 

Stolen AI accounts are sold on the dark web

With AI tools becoming more popular, criminals are now targeting the accounts that use them. Hackers are stealing ChatGPT logins, OpenAI API keys, and other platform credentials to bypass usage limits and hide their identity. These accounts are often stolen through malware, phishing, or credential stuffing attacks. The stolen credentials are then sold in bulk on Telegram channels and underground forums. Some attackers are even using tools that can bypass multi-factor authentication and session-based security protections. These stolen accounts allow criminals to access powerful AI tools and use them for phishing, malware generation, and scam automation.
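On the defensive side, one simple check is whether a password already circulates in breach dumps before reusing it anywhere. The sketch below queries the public Pwned Passwords range API, which is designed around k-anonymity: only the first five characters of the password’s SHA-1 hash are ever sent over the network.

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords
    corpus. Only a 5-character hash prefix leaves the machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT" for the queried prefix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

print(breach_count("password123"))  # non-zero: a widely burned password
```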


Illustration of a person signing into their laptop. (Kurt “CyberGuy” Knutsson)


Jailbreaking AI is now a common tactic

Criminals are finding ways to bypass the safety rules built into AI models. On the dark web, attackers share techniques for jailbreaking AI so it will respond to requests that would normally be blocked. Common methods include:

  • Telling the AI to pretend it is a fictional character that has no rules or limitations
  • Phrasing dangerous questions as academic or research-related scenarios
  • Asking for technical instructions using less obvious wording so the request doesn’t get flagged

Some AI models can even be tricked into jailbreaking themselves. Attackers prompt the model to create input that causes it to override its own restrictions. This shows how AI systems can be manipulated in unexpected and dangerous ways.

AI-generated malware is entering the mainstream

AI is now being used to build malware, phishing kits, ransomware scripts, and more. Recently, a group called FunkSec was identified as the leading ransomware gang using AI. Its leader admitted that at least 20% of the group’s attacks are powered by AI. FunkSec has also used AI to help launch attacks that flood websites or services with fake traffic, making them crash or go offline. These are known as denial-of-service attacks. The group even created its own AI-powered chatbot to promote its activities and communicate with victims on its public website.

Some cybercriminals are even using AI to help with marketing and data analysis after an attack. One tool called Rhadamanthys Stealer 0.7 claimed to use AI for “text recognition” to sound more advanced, but researchers later found it was using older technology instead. This shows how attackers use AI buzzwords to make their tools seem more advanced or trustworthy to buyers.

Other tools are more advanced. One example is DarkGPT, a chatbot built specifically to sort through huge databases of stolen information. After a successful attack, scammers often end up with logs full of usernames, passwords, and other private details. Instead of sifting through this data manually, they use AI to quickly find valuable accounts they can break into, sell, or use for more targeted attacks like ransomware.


Poisoned AI models are spreading misinformation

Sometimes, attackers do not need to hack an AI system. Instead, they trick it by feeding it false or misleading information. This tactic is called AI poisoning, and it can cause the AI to give biased, harmful, or completely inaccurate answers. There are two main ways this happens:

  • Training poisoning: Attackers sneak false or harmful data into the model during development
  • Retrieval poisoning: Misleading content online gets planted, which the AI later picks up when generating answers

In 2024, attackers uploaded 100 tampered AI models to the open-source platform Hugging Face. These poisoned models looked like helpful tools, but when people used them, they could spread false information or output malicious code.
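A basic hygiene step when downloading model files from any public hub is to verify a pinned checksum before loading them. The sketch below is generic; the file name and expected digest are placeholders standing in for values published through a channel you trust, not scraped from the same page that hosts the file.

```python
import hashlib

MODEL_PATH = "downloaded-model.safetensors"  # hypothetical local file
EXPECTED_SHA256 = "replace-with-the-digest-published-by-a-trusted-source"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large model files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of(MODEL_PATH) != EXPECTED_SHA256:
    raise RuntimeError(f"Checksum mismatch; refusing to load {MODEL_PATH}")
```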

A large-scale example came from a Russian propaganda group called Pravda, which published more than 3.6 million fake articles online. These articles were designed to trick AI chatbots into repeating their messages. In tests, researchers found that major AI systems echoed these false claims about 33% of the time.

Illustration of a hacker at work. (Kurt “CyberGuy” Knutsson)


How to protect yourself from AI-driven cyber threats

AI-powered cybercrime blends realism, speed, and scale. These scams are not just harder to detect. They are also easier to launch. Here’s how to stay protected:

1) Avoid entering sensitive data into public AI tools: Never share passwords, personal details, or confidential business information in any AI chat, even if it seems private. These inputs can sometimes be logged or misused.

2) Use strong antivirus software: AI-generated phishing emails and malware can slip past outdated security tools. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices.

3) Turn on two-factor authentication (2FA): 2FA adds an extra layer of protection to your accounts, including AI platforms. It makes it much harder for attackers to break in using stolen passwords. (A short sketch of how these one-time codes are computed appears after this list.)

4) Be extra cautious with unexpected video calls or voice messages: If something feels off, even if the person seems familiar, verify before taking action. Deepfake audio and video can sound and look very real.

5) Use a personal data removal service: With AI-powered scams and deepfake attacks on the rise, criminals are increasingly relying on publicly available personal information to craft convincing impersonations or target victims with personalized phishing. By using a reputable personal data removal service, you can reduce your digital footprint on data broker sites and public databases. This makes it much harder for scammers to gather the details they need to convincingly mimic your identity or launch targeted AI-driven attacks.

While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services here.

6) Consider identity theft protection: If your data is leaked through a scam, early detection is key. Identity theft protection companies can monitor personal information like your Social Security number (SSN), phone number, and email address, and alert you if it is being sold on the dark web or being used to open an account. They can also assist you in freezing your bank and credit card accounts to prevent further unauthorized use by criminals. See my tips and best picks on how to protect yourself from identity theft.

7) Regularly monitor your financial accounts: AI-generated phishing, malware, and account takeover attacks are now more sophisticated and widespread than ever, as highlighted in the 2025 AI Security Report. By frequently reviewing your bank and credit card statements for suspicious activity, you can catch unauthorized transactions early, often before major damage is done. Quick detection is crucial, especially since stolen credentials are often resold and used within days of a breach.
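As promised under step 3, here is a minimal sketch of what an authenticator app actually computes when it shows a six-digit code: the RFC 6238 TOTP algorithm, an HMAC over the current 30-second time step keyed with a shared secret. The base32 secret below is a throwaway example; real secrets come from a service’s enrollment QR code.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current RFC 6238 one-time code from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period           # 30-second time step
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Throwaway demo secret; both server and authenticator hold the same value.
print(totp("JBSWY3DPEHPK3PXP"))
```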

Internet Security

Michael Saylor: Proof of Reserves Isn’t Worth the Risk

Speaking at a side event during the Bitcoin 2025 conference in Las Vegas, Saylor called the transparency trend “a bad idea.” He warned that proof of reserves could endanger investors and institutions alike. “Publishing wallet addresses is like handing over a treasure map,” Saylor said. “It dilutes the security of the issuer…

Internet Security

Solana co-founder’s personal info leaked on Migos’ Instagram in suspected data breach


Key Takeaways

  • Hackers compromised Migos’ Instagram to expose Solana co-founder Raj Gokal’s personal data.
  • A 40 Bitcoin ransom was demanded by the attackers who threatened Gokal after the breach.


The official Instagram account of the famous hip-hop group Migos was apparently hacked on Monday, with the page briefly turning into a leaked site for sensitive personal information belonging to Solana co-founder Raj Gokal.

According to Andy, co-founder of The Rollup, the compromised account, which has over 13 million followers, posted a series of photos of alleged IDs, passport scans, and other private data linked to Gokal and another individual identified as “Arvind.”

The leaked documents were paired with threatening captions and explicit references to an unpaid crypto ransom, including one post stating, “you should’ve paid the 40 btc,” indicating a failed extortion effort.

The hackers also modified the account’s bio to promote a meme coin scam and shared Telegram links and audio files. One post taunted the victims by referencing their Solana token holdings.

Andy said that the compromised content was visible for about 90 minutes before removal.

Commenting on Andy’s report, blockchain investigator ZachXBT noted that the extortion attempt appeared to follow a week of coordinated social engineering efforts targeting Raj Gokal.

Gokal has not released an official statement. However, his earlier X posts indicated awareness of attempts to breach his personal and professional systems prior to the incident.

Migos’ Instagram account has since returned to normal operation.

This is a developing story.

