How well are security leaders sleeping at night? According to a recent Gigamon report, it appears that many cyber professionals are restless and worried.
According to the report, 50% of the IT and security leaders surveyed lack confidence that they know where their most sensitive data is stored and how it’s secured. Meanwhile, 56% of respondents say the exploitation of undiscovered blind spots is the leading concern keeping them restless.
The report underscores the ongoing need for improved cloud and hybrid cloud security. Solutions that expose blind-spot vulnerabilities are urgently needed as well.
Strong cloud and hybrid cloud security strategy needed
The worries exposed in the Gigamon report aren’t due to an active imagination on the part of cyber pros. Attacks are bombarding the security front lines. The report cites that 90% of those surveyed have suffered a data breach in the last 18 months.
As per the report, many IT and security teams lack critical visibility across data in motion from on-premises to the cloud. And they may not acknowledge these blind spots precisely because they can’t see them.
To manage a cohesive hybrid, multi-cloud security program, teams clearly need to establish visibility and control. This means integrating the appropriate controls, orchestrating workload deployment and establishing effective threat management.
Some solutions involve both cloud-native security controls and secure-by-design methodology. Furthermore, security orchestration and automation should be established to beef up protection further.
Where’s your data?
The continued struggle with data location has also been impacted by regulatory action. For example, the GDPR requires that users’ personal data and privacy be adequately protected by organizations that gather, process and store that data.
All this has given rise to concerns about data residency (data must be stored where it’s collected), data localization (data must remain in a specific place) and data sovereignty (rights and control over data based on jurisdiction).
However, cloud data residency is complicated by how cloud resources are deployed and used. For example, with dynamic cloud provisioning, resources are allocated upon demand, which can increase the attack surface. Furthermore, transient microservices in the cloud can result in data access and movement that is hard to detect and monitor.
Given these concerns, how can a security pro get any rest at all?
Know your data’s whereabouts
Ensuring data residency relies on two critical capabilities: localization and compliance monitoring. Localization technology detects the whereabouts of data, its copies and any movement within the cloud. Compliance monitoring technology centralizes, analyzes and reports on the adherence of cloud environments to regulatory requirements.
A Data Security Posture Management (DSPM) platform offers these capabilities by enhancing visibility into user activities and behavioral risks, aiding organizations in regulatory compliance. DSPM identifies the location of data and its copies stored in the cloud. DSPM also tracks data flows to and from cloud resources that may pose security risks.
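To make the localization idea concrete, here is a minimal sketch of a residency check, assuming the data lives in AWS S3 and using the boto3 SDK. The ALLOWED_REGIONS policy is a made-up example, and a real DSPM platform would also discover copies, shadow data stores and data flows rather than just bucket locations.

```python
# Minimal data-residency check: list S3 buckets and flag any stored outside
# an approved-regions policy. Illustrative only -- a real DSPM platform also
# tracks copies, shadow data stores and data movement.
import boto3

ALLOWED_REGIONS = {"eu-central-1", "eu-west-1"}  # hypothetical residency policy

s3 = boto3.client("s3")

def bucket_region(name: str) -> str:
    # get_bucket_location returns None for the default us-east-1 region
    loc = s3.get_bucket_location(Bucket=name).get("LocationConstraint")
    return loc or "us-east-1"

violations = []
for bucket in s3.list_buckets()["Buckets"]:
    region = bucket_region(bucket["Name"])
    if region not in ALLOWED_REGIONS:
        violations.append((bucket["Name"], region))

for name, region in violations:
    print(f"Residency violation: bucket '{name}' is stored in {region}")
```

Run on a schedule and fed into compliance reporting, a check like this approximates the "where is my data, and is it allowed to be there?" question the report says keeps leaders awake.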
Exposing data blind spots
What about those blind spots keeping security teams up at night? Attack Surface Management (ASM) can help by continuously monitoring IT infrastructure to detect blind spots and remediate potential points of attack.
This may involve deploying network monitoring tools capable of inspecting encrypted traffic, implementing cloud-native security controls and integrating cloud SIEM systems to correlate security events across on-premises and cloud environments.
Additionally, organizations should regularly assess their attack surface and adjust security measures accordingly to adapt to evolving threats and infrastructure changes.
The four core processes in attack surface management include the following (a rough code sketch follows the list):
- Asset discovery: Automatically scans for entry points. Assets include computers, IoT devices, databases, shadow IT and third-party SaaS apps.
- Classification and prioritization: Assigns a risk score based on the probability of attackers targeting each asset. Teams can categorize the risks and establish a plan of action to fix issues.
- Remediation: Involves fixing issues with operating system patches, debugging or enhancing data encryption.
- Monitoring: Continuous scanning for new vulnerabilities and remediating attack vectors in real time.
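As a rough illustration of the first two steps, discovery and prioritization, the sketch below probes a small, hypothetical asset inventory for commonly exposed ports and ranks the hosts by a naive risk score. The hostnames and port weights are invented, and real ASM tooling also discovers assets you don’t already know about.

```python
# Toy sketch of asset discovery and risk prioritization: probe known hosts
# for commonly exposed ports and rank them by a naive risk score.
import socket

ASSETS = ["app.example.com", "db.example.com"]        # hypothetical inventory
RISKY_PORTS = {21: 5, 23: 5, 3389: 4, 445: 4, 80: 1}  # naive port weights

def open_ports(host: str) -> list[int]:
    found = []
    for port in RISKY_PORTS:
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(1.0)
                if sock.connect_ex((host, port)) == 0:  # 0 means the port accepted
                    found.append(port)
        except OSError:
            pass  # unresolvable or unreachable host: treat as no open port
    return found

def risk_score(ports: list[int]) -> int:
    return sum(RISKY_PORTS[p] for p in ports)

# Classification and prioritization: highest score first
ranked = sorted(
    ((host, open_ports(host)) for host in ASSETS),
    key=lambda item: risk_score(item[1]),
    reverse=True,
)
for host, ports in ranked:
    print(f"{host}: open {ports}, risk score {risk_score(ports)}")
```

The remediation and monitoring steps then work down this ranked list and re-run the scan continuously so new exposures surface in near real time.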
Security teams want peace of mind. Solutions such as cloud security strategy services and attack surface management just might help them rest a bit easier.
The post Cloud security uncertainty: Do you know where your data is? appeared first on Security Intelligence.
Ransomware payments hit an all-time high of $1.1 billion in 2023, following a steep drop in total payouts in 2022. Some factors that may have contributed to the decline in 2022 were the Ukraine conflict, fewer victims paying ransoms and cyber group takedowns by legal authorities.
In 2023, however, ransomware payouts came roaring back to set a new all-time record. During 2023, nefarious actors targeted high-profile institutions and critical infrastructure, including hospitals, schools and government agencies.
Still, it’s not all roses for ransomware gangs. Many top-tier groups are struggling to adapt to talent scarcity, Russia-Ukraine war fatigue and repeated disruptions by law enforcement. Let’s take a look at the state of ransomware security today.
New record for ransomware payouts
In 2023, ransomware actors staged a major comeback. This included record-breaking payments and a substantial increase in the scope and complexity of attacks, according to a recent Chainalysis report.
In 2022, a major drop in attacks cut ransoms paid by $416 million compared to 2021, to a total of $567 million (implying roughly $983 million paid the year before). But in 2023, ransomware attacks surged to establish a new record in ransoms paid at $1.1 billion.
As per Chainalysis, reasons for the 2022 decline include the Ukraine War, as some cyber actors diverted their actions toward political motives rather than financial ones. Another factor includes an increasing trend of victims’ reluctance to pay ransoms. Finally, the takedown of ransomware groups, such as the massive Hive variant, also put a damper on malicious activity in 2022.
Meanwhile, factors that contribute to the growing total ransomware payments seen in 2023 include:
- Huge growth in the number of threat actors carrying out attacks, with at least 538 new ransomware variants detected in 2023
- Big game hunting, which drives a larger share of ransom payments of $1 million or more
- Ransomware-as-a-Service (RaaS), which makes easy-to-use malicious tools widely available
Struggling ransomware groups
Although the dollar totals are rising, some ransomware groups have actually been struggling lately. According to Marley Smith, Principal Threat Researcher at RedSense, many RaaS groups must recruit highly skilled (and scarce) contractors to access the penetration testing talent required to carry out attacks against large targets. “Things are just getting increasingly complex and almost desperate in terms of the ability to continue operations,” Smith said.
Meanwhile, Yelisey Bohuslavskiy, Co-Founder and Chief Research Officer at RedSense, says that many ransomware practitioners live “really traumatized” lives due to the Russia-Ukraine war. “The top-tier ransomware groups consist of Russians, Belarusians and Ukrainians, and half of them are now in this very strange situation when they still know each other and chat constantly. But their countries are at war, and they need to figure out how to work together while being at war.”
Don’t pay ransomware
Winning the war against ransomware requires the right technology as well as a collaborative effort between law enforcement, product makers and organizations. If companies don’t do their part, such as being alert for social engineering attacks, it’s impossible to stop ransomware. But things are changing. Enterprises are no longer getting completely devastated by data encryption attacks. And it’s not uncommon for victims to recover their ransomware payments.
In 2021, the U.S. Treasury established reporting requirements that victims of ransomware should follow. According to Coveware, completing due diligence before any payment has since become a normal best practice within the incident response industry, and reporting to law enforcement, which was not routine before the guidelines, has increased. The guidelines also created a diligence framework and standard for how victims can avoid paying a sanctioned actor.
Many entities, including IBM, strongly advise against paying ransomware. Instead, follow best practices, check out IBM’s Definitive Guide to Ransomware and keep your shields up.
The post Ransomware payouts hit all-time high, but that’s not the whole story appeared first on Security Intelligence.
While the race to achieve generative AI intensifies, the ethical debate surrounding the technology also continues to heat up. And the stakes keep getting higher.
As per Gartner, “Organizations are responsible for ensuring that AI projects they develop, deploy or use do not have negative ethical consequences.” Meanwhile, 79% of executives say AI ethics is important to their enterprise-wide AI approach, but less than 25% have operationalized ethics governance principles.
AI is also high on the list of United States government concerns. In late February, Speaker Mike Johnson and Democratic Leader Hakeem Jeffries announced the establishment of a bipartisan Task Force on AI to explore how Congress can ensure that America continues to lead global AI innovation. The Task Force will also consider the guardrails required to safeguard the nation against current and emerging threats and to ensure the development of safe and trustworthy technology.
Clearly, good governance is essential to address AI-associated risks. But what does sound AI governance look like? A new IBM-featured case study by Gartner provides some answers. The study details how to establish a governance framework to manage AI ethics concerns. Let’s take a look.
Why AI governance matters
As businesses increasingly adopt AI into their everyday operations, the ethical use of the technology has become a hot topic. The problem is that organizations often rely on broad corporate principles, combined with legal or independent review boards, to assess the ethical risks of individual AI use cases.
However, according to the Gartner case study, AI ethical principles can be too broad or abstract. Then, project leaders struggle to decide whether individual AI use cases are ethical or not. Meanwhile, legal and review board teams lack visibility into how AI is actually being used in business. All this opens the door to unethical use (intentional or not) of AI and subsequent business and compliance risks.
Given the potential impact, the problem must first be addressed at a governance level. Then, subsequent organizational implementation with the appropriate checks and balances must follow.
Four core roles of AI governance framework
As per the case study, business and privacy leaders at IBM developed a governance framework to address ethical concerns surrounding AI projects. This framework is empowered by four core roles:
- Policy advisory committee: Senior leaders are responsible for determining global regulatory and public policy objectives, as well as privacy, data and technology ethics risks and strategies.
- AI ethics board: Co-chaired by the company’s global AI ethics leader from IBM Research and the chief privacy and trust officer, the Board comprises a cross-functional and centralized team that defines, maintains and advises about IBM’s AI ethics policies, practices and communications.
- AI ethics focal points: Each business unit has focal points (business unit representatives) who act as the first point of contact to proactively identify and assess technology ethics concerns, mitigate risks for individual use cases and forward projects to the AI Ethics Board for review. A large part of AI governance hinges upon these individuals, as we’ll see later.
- Advocacy network: A grassroots network of employees who promote a culture of ethical, responsible and trustworthy AI technology. These advocates contribute to open workstreams and help scale AI ethics initiatives throughout the organization.
Risk-based assessment criteria
If an AI ethics issue is identified, the Focal Point assigned to the use case’s business unit will initiate an assessment. The Focal Point executes this process on the front lines, which enables the triage of low-risk cases. For higher-risk cases, a formal risk assessment is completed and escalated to the AI Ethics Board for review.
Each use case is evaluated using guidelines including:
- Associated properties and intended use: Investigates the nature, intended use and risk level of a particular use case. Could the use case cause harm? Who is the end user? Are any individual rights being violated?
- Regulatory compliance: Determines whether data will be handled safely and in accordance with applicable privacy laws and industry regulations.
- Previously reviewed use cases: Provides insights and next steps from use cases previously reviewed by the AI Ethics Board. Includes a list of AI use cases that require the board’s approval.
- Alignment with AI ethics principles: Determines whether use cases meet foundational requirements, such as alignment with principles of fairness, transparency, explainability, robustness and privacy.
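As a loose illustration of how the focal-point triage described above might be encoded, the sketch below scores a use case against simplified versions of these criteria and decides whether it can be cleared locally or must go to the AI Ethics Board. The field names, weights and threshold are invented for illustration; the actual framework is a human review process, not a script.

```python
# Minimal sketch of the focal-point triage flow: score a use case against
# simplified assessment criteria and decide whether it is cleared locally or
# escalated to the AI Ethics Board. All names, weights and thresholds are invented.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    could_cause_harm: bool
    handles_personal_data: bool
    previously_approved_pattern: bool
    meets_ethics_principles: bool  # fairness, transparency, explainability, robustness, privacy

def risk_points(uc: UseCase) -> int:
    points = 0
    points += 3 if uc.could_cause_harm else 0
    points += 2 if uc.handles_personal_data else 0
    points += 0 if uc.previously_approved_pattern else 1
    points += 0 if uc.meets_ethics_principles else 3
    return points

def triage(uc: UseCase, escalation_threshold: int = 3) -> str:
    # Low-risk cases are cleared by the focal point; the rest go to the board.
    if risk_points(uc) < escalation_threshold:
        return "cleared by focal point"
    return "escalated to AI Ethics Board"

print(triage(UseCase("chatbot FAQ assistant", False, False, True, True)))
print(triage(UseCase("automated hiring screen", True, True, False, False)))
```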
Benefits of an AI governance framework
According to the Gartner report, the implementation of an AI governance framework benefited IBM by:
- Scaling AI ethics: Focal points drive compliance and initiate reviews in their respective business units, which enables AI ethics review at scale.
- Increasing strategic alignment of AI ethics vision: Focal points connect with technical, thought and business leaders in the AI ethics space throughout the business and across the globe.
- Expediting completion of low-risk projects and proposals: By triaging low-risk services or projects, focal points enable faster project reviews.
- Enhancing board readiness and preparedness: By empowering focal points to guide AI ethics early in the process, the AI Ethics Board can review any remaining use cases more efficiently.
With great power comes great responsibility
When ChatGPT debuted in November 2022, the entire world was abuzz with wild expectations. Now, current AI trends point towards more realistic expectations about the technology. Standalone tools like ChatGPT may capture popular imagination, but effective integration into established services will engender more profound change across industries.
Undoubtedly, AI opens the door to powerful new tools and techniques to get work done. However, the associated risks are real as well. Elevated multimodal AI capabilities and lowered barriers to entry also invite abuse: deepfakes, privacy issues, perpetuation of bias and even evasion of CAPTCHA safeguards may become increasingly easy for threat groups.
While bad actors are already using AI, the legitimate business world must also take preventative measures to keep employees, customers and communities safe.
ChatGPT says, “Negative consequences might encompass biases perpetuated by AI algorithms, breaches of privacy, exacerbation of societal inequalities or unintended harm to individuals or communities. Additionally, there could be implications for trust, reputation damage or legal ramifications stemming from unethical AI practices.”
To protect against these types of risks, AI ethics governance is essential.
The post What should an AI ethics governance framework look like? appeared first on Security Intelligence.
When Donald Trump waddled into a Chick-Fil-A in Atlanta this past week and feebly ordered milkshakes for customers, the whole thing felt more than a little off. Not one customer in the restaurant seemed to have a problem with Trump being there at all. Even in a place like Chick-Fil-A, what were the odds that 100% of the people inside just happened to be pro-Trump?
We said the whole thing felt like one big setup. Now it’s starting to look like precisely that. It turns out the “random” customer who hugged Trump was instead a Republican consultant with strong ties to the Georgia GOP. What are the odds that she just randomly happened to be eating in the same Chick-Fil-A that Trump happened to walk into?
. . .
In other words, it’s pretty obvious that Trump’s attempt at a viral moment – walking into a restaurant and getting a hug from a Black woman who happened to be eating there – was indeed staged. This raises additional questions about what all else was staged about the visit. When it comes to Trump, these things are simply never authentic.
The post Told you Trump’s Chick-Fil-A visit was a setup appeared first on Palmer Report.
(The Hill) — Former President Trump on Saturday mocked the judges who have been a part of his various legal battles, asking which of them is the worst or the most corrupt.
The post continues his assault on the judges appointed to preside over his various trials, and comes just before his hush money trial is set to begin in New York on Monday.
“Who is the WORST, most EVIL and most CORRUPT JUDGE? Would it be Judge Arthur Engoron, Judge Lewis Kaplan or, could it be that my current New York disaster, Judge Juan Merchan, is the WORST?” Trump said in a Truth Social post.
Merchan will preside over the hush money trial, which, when it begins on Monday, will mark the first time in U.S. history that a former U.S. president sits for a trial on criminal charges.
“They are all from violent crime (without retribution!) filled New York, are really bad Judges, are extraordinarily conflicted and unfair, and most obviously to all, suffer from a rare but very lethal disease, TDS, commonly known as TRUMP DERANGEMENT SYNDROME,” the former president wrote.
Engoron oversaw Trump’s New York civil fraud case, while Kaplan oversaw his E. Jean Carroll defamation case.
Trump has previously lashed out at Merchan’s daughter Loren, who has served as an executive at a progressive political consulting firm that has worked for Democrats including President Biden and Vice President Harris.
Loren Merchan appeared to be behind an account on the social platform X that used a photo illustration of Trump in prison as a profile picture, according to The Associated Press. A court spokesperson said the account was no longer linked to her; it has since gone private, and the photo has been updated.
“So, let me get this straight,” Trump wrote in a Truth Social post.
“The Judge’s daughter is allowed to post pictures of her ‘dream’ of putting me in jail, the Manhattan D.A. is able to say whatever lies about me he wants, the Judge can violate our Laws and Constitution at every turn, but I am not allowed to talk about the attacks against me, and the Lunatics trying to destroy my life, and prevent me from winning the 2024 Presidential Election, which I am dominating?”
“I used to believe that a Caliphate should be established and that all lands should be turned into Muslim lands,”
says the @ApostateProphet in an interview with V24 founder @StefanTompson, and shockingly reveals that he was raised and educated in Germany. pic.twitter.com/4yFLqEHwt0
— Visegrád 24 (@visegrad24) April 14, 2024
The post @visegrad24: I used to believe that a Caliphate should be established and that all lands should be turned into Muslim lands,” says the @ApostateProphet in an interview with V24 founder @StefanTompson, and shockingly reveals that he was raised and educated in Germany. first appeared on JOSSICA – The Journal of the Open Source Strategic Intelligence and Counterintelligence Analysis.
President Biden told PM Netanyahu that he should consider tonight’s events a ‘win’ for Israel, according to CNN.
— Visegrád 24 (@visegrad24) April 14, 2024
The post @visegrad24: President Biden told PM Netanyahu that he should consider tonight’s events a ‘win’ for Israel, according to CNN. first appeared on JOSSICA – The Journal of the Open Source Strategic Intelligence and Counterintelligence Analysis.