Sunday 26 June 2022

What Makes Zero Trust Better And Different From Traditional Security

Enterprises have already started to embrace zero trust security over traditional security since it offers improved security while simultaneously improving flexibility and reducing complexity. Here’s how zero trust outperforms the traditional model:

Network access

Zero trust security enables users to connect securely to in-house applications. Users can reach these applications without the applications being exposed to the internet and without being granted broad network access.

Traditional security, on the other hand, uses the castle-and-moat concept: everyone inside the network is trusted by default. Users outside the perimeter find it difficult to reach applications, while everyone inside must be trusted implicitly. The problem is that if a hacker poses as an insider, they get access to everything available within the network.

User identities

Zero trust security grants no implicit trust before awarding a user access to anything. It also checks other contextual data before granting access to the client. In short, this security model pays close attention to who the user is, so it verifies the user's identity every time access is requested.
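The per-request verification described above can be sketched in a few lines. This is a minimal illustration, not a production design: the token store and device registry below (`VALID_TOKENS`, `TRUSTED_DEVICES`) are hypothetical stand-ins for a real identity provider and device-posture service.

```python
# Minimal sketch of zero trust per-request authorization: identity AND
# device context are re-verified on every access request, with no implicit
# trust carried over from the network location or a prior session.

VALID_TOKENS = {"tok-alice": "alice"}       # stand-in for an identity provider
TRUSTED_DEVICES = {"alice": {"laptop-01"}}  # stand-in for a device registry

def authorize_request(token: str, device_id: str) -> bool:
    """Grant access only if both identity and device context verify."""
    user = VALID_TOKENS.get(token)
    if user is None:
        return False                        # unknown identity: deny
    # Context check: is this a device registered to this user?
    return device_id in TRUSTED_DEVICES.get(user, set())

# Every request is evaluated independently.
print(authorize_request("tok-alice", "laptop-01"))   # verified identity + device
print(authorize_request("tok-alice", "tablet-99"))   # right user, unknown device
```

A real deployment would replace the dictionaries with calls to an identity provider and a device-posture check, but the shape is the same: deny by default, verify on every request.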

Traditional security works on an entirely different principle. It gives weight to where the user is connecting from within the network: it relies on implicit trust because the client's IP address or location defines the user's identity in the system.

Modern techniques and technologies

Zero trust security addresses the concerns of cloud-hosted data by rethinking secure network design. It solves these issues by assuming that nothing is trustworthy by default, granting trust only after verification and authorization.

Traditional security, however, lacks the modern techniques and technologies to monitor such a network design. The absence of these tools and services can compromise cloud-hosted data, applications, and users.

What are the Benefits of Zero Trust Security?

Here is how zero trust improves on traditional security:

  • It helps users gain better visibility across networks and enterprises.
  • It simplifies IT management through continual monitoring and analysis.
  • It enables the security system to work smarter by utilizing the central monitoring functions.
  • It ensures better data protection for networks, applications, and users.
  • It helps secure the remote workforce of an organization by considering identity as the perimeter.
  • It uses automation to grant verified users access quickly.
  • It ensures continuous compliance with each access request through evaluation.

Final Thoughts

Zero trust security rests on the principle that a business must extend no default trust to any element that crosses its perimeter. It verifies anything that attempts to connect to or access the framework. A zero-trust network differs from regular VPNs and firewalls in that it secures access to all applications within an enterprise. Additionally, zero trust replaces traditional security technologies by offering stronger authentication methods.

So when it comes to digital transformation initiatives, proactive protection is required in this new decade, and a wise move for enterprises is to implement zero-trust security.


Originally Published at Hackernoon


Monday 13 June 2022

Top 7 Google Drive Security Mistakes Companies Keep Making

You've likely had to work with a file stored on Google Drive at some point in your career. This can be a gift and a curse for you. Sharing essential documents, files, and applications is great for streamlining your workflow.

But, it may not be the most secure option available to you when it comes to account security. Google Drive is a file storage and synchronization service created by Google. Users can store files in the cloud, share files, and edit documents, spreadsheets, and presentations with collaborators.

It's no surprise why it's so popular: It's free, easy to use, and accessible anywhere. But did you know that hackers are targeting Google Drive users?

Through password guessing, exploiting weak passwords, or phishing campaigns, bad actors gain access to business documents such as intellectual property, financial records, and personally identifiable information (PII) stored on Google Drive.

There are many factors at play when it comes to securing data in Google Drive, and the same security mistakes around that data tend to be repeated from one company to the next.

In this post, we'll be outlining seven Google Drive security mistakes that companies across industries are currently making.

Cloud data security is a critical consideration for companies moving to the cloud. But even as companies do more with Google Drive, many are still making mistakes that put their data at risk.

We've gathered the top seven Google Drive security mistakes companies make and how to avoid them.

First Mistake: Using G Suite without Two-Step Verification

Two-step verification is your first line of defense against cyber threats. It works alongside your password to add an extra layer of account protection. You're alerted whenever someone attempts to sign in to your account from an unrecognized device or browser.

To protect users' accounts with two-step verification, go to the Google Admin console and select Security > Authentication > 2-Step Verification.

Ensure that every employee has enabled two-step verification in their accounts. The same risks apply when an employee accesses their work device or uses their work account on a personal device.

The best way to keep both business and personal accounts safe is for employees to turn on two-step verification for all the accounts they use on their devices.

Second Mistake: Sharing Google Files and Folders Carelessly

If you use Google Drive for work, chances are that you share some files with your colleagues. But what if you mistakenly share a file with the wrong person? Or what if someone leaves your organization but still has access to sensitive documents?

You should audit shared documents frequently and make it a policy that employees revoke access to any files that are no longer needed. This will help prevent accidental data leaks.

Third Mistake: Not Using Google Vault

Google Vault is a tool that helps organizations manage, retain, search and audit their email, Google Drive files, and on-the-record chats.

It's an essential component of any security strategy, enabling you to detect and investigate the threats you face. If you're not already using Google Vault, find out how it works and start today.

Fourth Mistake: Not Protecting Your Data before Sharing It

The ability to easily share files is one of the benefits of Google Drive. But sharing can also be a security risk. By default, anyone with whom you share a file can edit it, comment on it or share it with others (including people outside your organization).

Take care when sharing files by setting appropriate permissions for each file and folder. For example, if you only want to allow viewing or commenting on a file, don't share it with "can edit" permissions. You can also protect files before sharing them by setting an expiration date or requiring a password to open them.
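As an illustration, a least-privilege share can be expressed with the Google Drive API v3 permission fields (`type`, `role`, `emailAddress`, `expirationTime`). The helper below only builds the request body; the commented line shows where it would be sent with an authenticated `googleapiclient` service object, and the email address and date are placeholders.

```python
# Sketch: build a view-only, auto-expiring Drive permission body using the
# field names from the Drive v3 permissions resource.

def build_viewer_permission(email: str, expires_rfc3339: str) -> dict:
    """Read-only permission for one user that expires automatically."""
    return {
        "type": "user",                     # grant to one account, not "anyone"
        "role": "reader",                   # view-only: no edit, no re-sharing
        "emailAddress": email,
        "expirationTime": expires_rfc3339,  # RFC 3339, e.g. 2025-01-31T00:00:00Z
    }

perm = build_viewer_permission("colleague@example.com", "2025-01-31T00:00:00Z")

# With an authenticated Drive v3 service object, the body would be applied as:
# service.permissions().create(fileId=FILE_ID, body=perm,
#                              sendNotificationEmail=False).execute()
```

Scoping the role to `reader` and attaching an `expirationTime` means access lapses on its own instead of lingering after a project ends.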

Fifth Mistake: Not Training Your Employees

Most employees don't understand the risks of using cloud services. Many don't realize that putting sensitive information in the cloud can expose it to more trouble than an on-premises solution would.

This is especially true of millennial employees, who have grown up with the internet and social media and are used to sharing photos, videos, and content online.

Sixth Mistake: Not Auditing Shared Documents Frequently

Another mistake that companies make while using Google Drive is not auditing shared documents from time to time.

Google provides an easy-to-use interface that lets you see who has access to files and folders. You can easily see what type of permission each user has—whether they can only view or comment on a file or whether they can edit it as well.

It is imperative to audit shared documents from time to time because some users might have left your company while still being able to access sensitive documents stored on Google Drive.
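Such an audit can be automated. The sketch below assumes permission records in the shape returned by the Drive v3 `permissions.list` endpoint; the company domain and active-employee list are illustrative inputs, not real data.

```python
# Hypothetical audit helper: flag permission grants that go to link-sharing,
# to accounts outside the company domain, or to departed employees.

def find_risky_grants(permissions, company_domain, active_emails):
    risky = []
    for p in permissions:
        email = p.get("emailAddress", "")
        if p.get("type") == "anyone":
            risky.append(p)          # open link-sharing: anyone can access
        elif not email.endswith("@" + company_domain):
            risky.append(p)          # external account
        elif email not in active_emails:
            risky.append(p)          # likely a departed employee
    return risky

perms = [
    {"type": "user", "emailAddress": "alice@acme.com", "role": "writer"},
    {"type": "user", "emailAddress": "bob@gmail.com", "role": "reader"},
    {"type": "user", "emailAddress": "eve@acme.com", "role": "writer"},
]
flagged = find_risky_grants(perms, "acme.com", {"alice@acme.com"})
# bob@gmail.com (external) and eve@acme.com (no longer active) are flagged
```

Run periodically against each shared file's permission list, a check like this surfaces exactly the stale grants this section warns about.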

Seventh Mistake: Storing Sensitive Files on Google Drive

This may seem obvious, but many people still don't get it. When you share a file through Google Drive, it's available for anyone with access to that drive to see it — even if you didn't intend for them to do so.

Even if you delete the file from your own Google Drive folder, it could still be accessible through another user's account if they downloaded a copy and saved it to their own Google Drive folder.

This means that confidential information could easily be leaked or stolen by an unauthorized person who has access to someone else's account or computer.

Bottom Line

Google Drive contains the stuff businesses hold most dear - their documents and data. But despite this, many companies are repeatedly making the same mistakes with their files. Each of the above errors is a common faux pas that every company should avoid to ensure that their business documents are protected as much as possible.

Securing Google Drive is an invaluable investment for companies that take their data security seriously. Successfully implementing Google Drive in a business requires strict security practices and strategies, including employee training on best practices for protecting files.

Given the massive number of files stored, this isn't an easy task, but that doesn't make it any less essential. Hopefully, these tips will help you keep your data safe. We hope that all companies will take these security measures seriously and act immediately so they do not become another victim of one of these situations.

So if you own a business or are a personal user who has sensitive information on your account, we urge you to start protecting your data by avoiding the mistakes above.



Monday 6 June 2022

Should Artificial Intelligence (AI) Be Regulated?

Artificial intelligence combines the elements of computer science and engineering to build intelligent computer programs that help solve global problems. AI works by classifying large volumes of data into actionable information through complex algorithms. Although some have argued that the application of AI is still at its infant stage, its application is already being seen across multiple sectors. For instance, in recent years, AI application has been witnessed in creating expert systems, speech recognition, natural language processing, and machine learning.

AI's potential application across multiple sectors has raised the demand for its use and brought great optimism regarding its ability to provide substantial improvements in working processes and possibly enhance human work. Its far-reaching application has fueled an explosion in its adoption across many sectors. For instance, in the health sector, experts have continued to test and apply various aspects of AI in the performance of administrative duties, documentation, patient monitoring, medical device automation, and image analysis.

Artificial Intelligence (AI) Regulation Debate

The surge in the adoption of AI has sparked heated debate about whether regulations should govern its use and application. Proponents of AI regulation argue that, left unregulated, AI could work against humanity instead of being applied for greater prosperity. One such proponent is Microsoft co-founder Bill Gates, who has raised concerns about "superintelligence" and expressed his lack of understanding of why others would not be concerned about the issue. Tesla CEO Elon Musk went further, equating unregulated artificial intelligence to "summoning the demon." Proponents across the spectrum have continuously made the case for regulating AI: there is no telling the lengths to which designers of these technologies could go in using anonymous data to drive their own agendas or for their own gain.

However, opponents of AI regulation have called for leaving AI unregulated, stating that it would be impossible to regulate every aspect of AI that affects human life. They argue that lawmakers have generally been unsuccessful in the past at regulating digital technologies, and that a regulatory regime aiming to cover all uses of artificial intelligence would have to be impossibly comprehensive in scope. It would not make sense, for example, to apply the same regulatory regime to facial recognition software as to smart refrigerators that place grocery orders based on consumption patterns. Instead, opponents propose a strategy whereby issues around the use of AI are approached incrementally, with a regulatory framework adopted based on the issues of concern at the time.

Opponents of regulation have equally argued that regulating AI technologies could stifle growth, reducing the prospects of the field ever achieving its full potential. AI technology experts such as Alex Loizou have actively opposed any form of AI regulation before the technology can be fully understood. As a solution, he has called on legislators to first give the technology time to flourish and evolve, so that all players understand it well before discussing ways of regulating it.

Emerging Issues regarding Unregulated AI

At the core of the debate over whether to regulate AI is the fact that this technology relies on large volumes of data. Proponents of regulation argue that since data is not tangible property, it could easily be misused if it fell into the wrong hands. Such data can interfere with individual privacy rights, database rights, copyright, and confidentiality in many ways. Already, there have been several instances of AI applications gone awry, leading to severe harm to the people affected.

According to an article in The Guardian, the application of AI has not always yielded the desired outcomes. For instance, overreliance on AI in facial recognition systems led to more than 1,000 airline travelers being wrongly flagged. In one case, an American Airlines pilot was detained at least 80 times while working because his name resembled that of a terrorist leader. In another instance, black contestants in a beauty contest were denied any win because the AI technology used to pick winners had been trained predominantly on images of white women.

Regulatory Response to Unregulated AI

The European Union has been quick to regulate the use and application of AI in order to protect its member states from specific harmful AI-enabled practices. The EU regulates the digital sector through the General Data Protection Regulation (GDPR), the proposed Digital Services Act, and the proposed Data Governance Act, and its proposed Artificial Intelligence Act introduces a four-tier system of risk to allow or prohibit uses of AI. Under the proposal, AI systems are broadly classified as prohibited AI or highly regulated AI. A system is deemed highly regulated ("high-risk") if it poses a high risk to human health and safety or to fundamental rights.

Prohibited AI systems are those that contravene EU values or present an unacceptable risk to the fundamental rights of citizens. It is noteworthy that the recommendations proposed in the regulation stem from the understanding that some algorithms deployed in AI applications can have direct consequences for people's lives and decisions. For instance, AI is now being used to diagnose medical conditions, approve loans, shortlist candidates, and recommend court penalties. In such cases, as in many others, the impact of AI use is enormous, which makes regulation imperative.

In regulating AI, the EU hopes to:

  • Establish, implement, document, and maintain a risk management system
  • Establish transparency and provide information to end-users of AI technologies
  • Provide a framework for data management and governance
  • Ensure that AI systems undergo a conformity assessment procedure before they are released to the market
  • Promptly correct issues of AI system non-compliance with existing AI regulation

Benefits of Regulating AI

It is perhaps not in doubt that regulation of AI creates confidence in the AI technologies being developed, because regulation helps safeguard and protect fundamental human rights. AI use has, in several instances, been seen to breach individuals' rights on the grounds of race, religion, and sex. Regulation of AI is expected to bring fairness and reason to the design of technologies that work toward improving human lives.

It is equally noteworthy that regulation helps keep infringements on fundamental human rights at bay as AI is applied across sectors. For example, regulation may protect defendants in the criminal justice system from sentencing based solely on machine learning, helping ensure that bad decisions made by machines are not used to deny defendants their fundamental rights. Regulation may also ensure that individuals are protected from unlawful detention based on a flawed facial recognition system. In the long term, such frameworks should help create accountable AI systems that are above reproach and that protect users and the general public from misuse or mishandling of their data.

Should AI be Regulated or Not?

It is perhaps apparent that artificial intelligence technologies affect almost every sphere of our lives. AI can improve our lives in ways we never deemed possible by explaining the reasoning behind certain decisions or events, making accurate predictions, and lessening human workloads. However, AI technologies can also disrupt human existence and infringe on fundamental rights. Thus, it is reasonable to suggest that AI technologies be regulated to minimize the risk to the fundamental human rights of all users. Regulation, however, should be approached in a manner that makes sense and does not discourage the use of these technologies. In this regard, the law should create an enabling framework for responsible AI use that is conscious of the risks involved in applying AI technologies. In the long term, this approach should help safeguard both the innovators who design and roll out these technologies and their end-users.



Busting Common Passwordless Authentication Myths: A Technical Analysis

Cyber threats continue to evolve for enterprises and passwordless authentication emerges as a transformative approach to digital security...