The concept of a penetration test

A penetration test – also known as a pentest – is a security and risk assessment of one or more applications and/or systems, performed by an IT security professional using manual and sometimes automated tests. While automated tests are meant to find low-hanging fruit and maximize the coverage of the assessment, manual testing is still required, as many use cases cannot be tested or accurately identified with automated tests.


Save time by switching to LinOTP – today

The well-known administrator mantra “Never change a running system” is no longer accurate, given today’s pace of IT development. In fact, regular changes have become a necessity to keep up with competitive markets. This is particularly true if the new technology is backed by steady development, which helps avoid unnecessary issues in the foreseeable future.

LinOTP brings substantial benefits to MFA-backed environments: it has no token vendor lock-in, it is open source and it is developed API-first. LinOTP is also easy to set up and integrate – a standard environment takes only about half a day. And we make sure that transitions from existing MFA solutions to LinOTP are stable, fast and painless for architects as well as for the administrators performing the migration and for the users.


What does LinOTP’s API-first development mean for you?

LinOTP – the open source MFA solution – is developed with an API-first strategy in mind. For us at KeyIdentity this does not mean dogmatically following each and every REST guideline. It means thinking about the easiest yet most flexible way of introducing a new feature to our API – in terms of simplicity of integration – before the feature is actually implemented, while maintaining backwards compatibility. As a result, our API is feature complete for all of our customers.
For an integration product such as LinOTP, easy integration into the user’s environment is probably the most important feature. Historically, LinOTP’s most-used integration path has been the RADIUS protocol together with the FreeRADIUS server shipped with the KeyIdentity LinOTP Smart Virtual Appliance (SVA), but the HTTP-based API has recently been gaining more and more importance. Especially for web applications, LinOTP’s HTTP-based API allows for easier and deeper integrations.
LinOTP features a stateless HTTP-based API for validation, returning responses in the simple-to-parse JSON format. Request parameters may be sent as URL-encoded data in a POST request’s body. This article shows what the API-first strategy means for you and how to integrate LinOTP into your own web applications.
To demonstrate LinOTP’s API by example, we show you how to integrate the QR Token into your environment.
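
To give a flavour of what such an integration looks like, below is a minimal Python sketch of a validation call. It assumes a LinOTP instance at the hypothetical host linotp.example.com and its /validate/check endpoint; host name, realm and certificate handling will of course differ in your environment.

# Minimal sketch of an OTP validation call against LinOTP's HTTP-based API.
# The host linotp.example.com and the realm are placeholder assumptions.
import requests

LINOTP_URL = "https://linotp.example.com/validate/check"

def validate_otp(user: str, otp: str, realm: str = "mydefrealm") -> bool:
    """Send user name and OTP as URL-encoded POST data and parse the JSON reply."""
    response = requests.post(
        LINOTP_URL,
        data={"user": user, "pass": otp, "realm": realm},  # URL-encoded form body
        verify=True,   # check the server certificate in production
        timeout=10,
    )
    response.raise_for_status()
    result = response.json().get("result", {})
    # "status" reports whether the request was processed, "value" whether the OTP was valid
    return bool(result.get("status")) and bool(result.get("value"))

if __name__ == "__main__":
    print("authenticated" if validate_otp("alice", "123456") else "rejected")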


Is your password putting you at risk?

One major cause of data breaches is the stolen password. Once hackers have an email address and password, a world of possibilities opens up to them, and the dangers are not limited to the account they have gained access to. The hackers’ next steps usually include not only selling the details to other criminals but also “credential stuffing”: taking the login details for one account and trying them on others. Imagine your ISP account was hacked – your work email, online shopping accounts and the other accounts most people possess would be targeted next.

Companies would do well to introduce two-factor or multi-factor authentication to protect their employees’ and customers’ digital identities. Put simply, this requires another authentication criterion to be satisfied before access to a site or account is granted. Many large corporations are turning to 2FA to reduce their customers’ exposure to data theft. Sony PlayStation, Apple, Instagram and Gmail all offer this additional security measure.


FIDO U2F: what it is and how you can secure your web applications using LinOTP

This is the first part of a series of blog entries about FIDO U2F and how you can use FIDO U2F and LinOTP to secure your web applications.

To kick off, we would like to introduce you to FIDO U2F and explain the idea behind it. Subsequent posts will cover the protocols and how you can use LinOTP to integrate FIDO U2F into your application.

What is FIDO U2F?

FIDO U2F is a technical specification defining a mechanism to reduce the reliance on passwords to authenticate users. It can be used to enrich a password-based authentication with a second factor or to replace the password-based login completely, depending on the use case.

FIDO U2F is developed by the FIDO Alliance (of which KeyIdentity is a member) and is actively being extended to new authentication models and markets. The driving idea behind FIDO U2F is to let users bring their own token to the registration process, so that you can securely validate their identity going forward, while the user only needs a single token for all websites without compromising security.

[Figure: The FIDO U2F user experience. Source: FIDO Alliance]

USB, NFC and Bluetooth are now defined as transport protocols and a wide range of devices is available to make use of them. Your users can decide on the method and vendor they prefer, based on costs, design or availability. The FIDO U2F implementation on the side of the web application is the same for all tokens implementing the FIDO specifications.

FIDO U2F is based on public key cryptography. When the user registers at your site, a key pair specific to your site is generated in the FIDO U2F token and, depending on the device, is stored on the token. The public key is then registered in your LinOTP backend. When the user authenticates later on, a challenge is presented to the FIDO U2F token and proof of the possession of the private key is presented by signing the challenge. The FIDO protocols are designed to protect the user’s privacy. It is not possible to track a user across services even though the same token is used.
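
To make the mechanism more concrete, the following simplified Python sketch shows the kind of check the backend performs during authentication: verifying the token’s ECDSA (P-256, SHA-256) signature over the application parameter, user-presence flag, counter and challenge parameter. The function name and parameters are our own illustration; in practice LinOTP or a maintained U2F library performs this verification for you.

# Illustrative sketch of the U2F authentication signature check (not LinOTP's actual code).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_u2f_signature(public_key: ec.EllipticCurvePublicKey,
                         app_id: str,
                         client_data: bytes,
                         user_presence: bytes,
                         counter: bytes,
                         signature: bytes) -> bool:
    """Rebuild the data signed by the token and verify its ECDSA signature."""
    signed_data = (
        hashlib.sha256(app_id.encode()).digest()   # application parameter
        + user_presence                            # 1-byte user presence flag
        + counter                                  # 4-byte big-endian counter
        + hashlib.sha256(client_data).digest()     # challenge parameter
    )
    try:
        public_key.verify(signature, signed_data, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False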

The handling of the device and the communication over the USB, NFC or Bluetooth transport protocols is provided by the user’s browser, either built in or available as a plug-in. Currently only Google Chrome ships built-in support, but support from Microsoft and plug-ins for Firefox are available.

FIDO U2F is still a fairly young standard, but adoption is picking up. After being developed mainly by Google and Yubico, the FIDO Alliance now has an impressive set of members, and the range of specifications has grown actively – and into interesting areas – over the last year.

This was just a quick introduction. In the following parts we will look at the registration and authentication process and at what an implementation of FIDO U2F can look like.

 

Why biometric authentication isn’t a silver bullet

There has been a lot of noise in the press recently about the rising tide of biometric authentication. The concept has been around for longer than many might think: facial recognition, for example, was tested at the Super Bowl in 2001, though the results were not widely circulated.

A few pioneering companies (particularly banks) are rolling out biometric trials, such as Standard Chartered in Asia with fingerprint and later voice recognition. In Singapore in particular, two rivals have both piloted voice authentication: DBS for customers dialling their call centre and OCBC for transaction authentication.

It’s not surprising – think of all the positives: easy to use, unique to the user, hard to share, tied to the individual’s own physical attributes and, frankly, “cool”, as there is a sense that this is how our identities should be verified in a digital age.

Nothing could better meet the “something you are” requirement than your voice, fingerprint or retina, so how can there be any downsides?

No security solution is without its drawbacks, and in the face of the biometric bandwagon, awareness of the following challenges helps ensure balanced decision-making with all of the facts to hand:

You can’t change a fingerprint or retina scan: while this is in one sense a strength, it is also a weakness. If your fingerprint is stolen and then used elsewhere, there could be major financial and other wider implications. Unlike refreshing a password, how do you create a new fingerprint? It’s not so easy.

Biometrics are hackable: yes, even your fingerprints are. Tsutomu Matsumoto built a working fake fingerprint from “gummy bear” material, initially from a live fingerprint and later from a fingerprint left on a physical object.

Creepy vs cool: a recent retail study found significant differences amongst consumers in how they viewed a store’s knowledge about them. While some groups saw being recognised by name as they walked the floor as “cool”, others found the possession of certain information “creepy”. Not every user wants to share their physical details with a retail outlet, for example.

Legalities: data security and privacy are seen as highly important in Germany, and whilst countries vary in how these topics are viewed, who holds biometric data, where it is stored, how it is used and which organisations it is shared with all carry political, ethical and legal implications. Given how new biometrics are, many legal precedents have not yet been established: facial recognition is legal in many US states, for example, yet in other parts of the world this may not be the case.

False rejections: imagine the accuracy of biometric readings is 98–99% – that’s pretty good, no? Not if you have 10,000 employees entering offices around the world or logging in each day: 98% accuracy means 200 colleagues will not be able to start work on time. Imagine an issue with a fingerprint sensor at a building entrance and the queue of impatient co-workers behind the unfortunate blocked user. How many security teams look forward to a mass reset of entry systems?

Individual use vs high volume: whilst fingerprint recognition might work for accessing a personal smartphone, it may not be suitable for far higher-volume authentication requirements. If hundreds of people are entering a building at the same time, even a small failure rate quickly adds up to noticeable delays.

Don’t underestimate a hacker’s determination: with every new security technology announced, there is sure to be a group of hackers eagerly awaiting the challenge of overcoming it, biometric or not. Retina and facial recognition, for example, are already being tricked by hi-res photographs of the individual, 3D models and more. Phone calls can be recorded to capture voices, keystrokes logged to learn typing cadence, and so forth. Whilst this is a lot of work to crack a single account, high-net-worth individuals or celebrities may be viewed as targets worth investing time in.

If you’d like to dive deeper into the topic, there is a great Wired article summarising the legal, technical and ethical complexity involved in biometric authentication.

In the meantime, review any authentication option with an open mind and keep asking the “What if?” questions. Explore the volume of users, use cases and level of security required; not every solution matches every scenario.

 

Why it’s time to revisit your red and blue team approach

Anyone who has read the recent news of Yahoo’s data breach which affected around 500 million accounts will probably have questioned their own organization’s ability to defend itself against external attacks of all sorts.

The task of maintaining defences in the face of constant threats is often partly owned by two IT security groups, the “red” and the “blue” team:

Red: focused on testing the effectiveness of the organization’s defences by acting as hackers, using penetration testing techniques to identify and expose vulnerabilities. They will use offensive tools, attempt SQL injection, scan the network and be familiar with firewall and router commands.

Blue: take the role of defending the organisation, staying constantly vigilant and ready to respond to any attack. They will be expected to recognize unusual patterns, behaviours or outliers, and to establish how and where attacks are about to take place. The blue team monitors systems such as the central log file management system and scans them for signs of attempted entry.

Whilst this role playing is a familiar exercise, there are potential dangers if the approach is not regularly reviewed:

  • The mindset and culture developed in an organisation over time can inhibit fresh thinking, both in terms of where and how attacks are typically launched and, equally, how they are defended against. It does not prepare teams for a concerted attack by strangers who have no respect for the system.
  • Teams can become stuck in their ways and “go through the motions”, repeating similar attacks to the last role play.
  • As Einstein once said, “We can’t solve problems by using the same kind of thinking we used when we created them”. Unless exceptional, over time, many employees become conditioned by their surroundings and view situations based on their perception of established norms, and the prevailing culture. This can restrict fresh thinking and lead to a narrow testing focus.

There are a number of activities which can help keep the red/blue teams sharp and effective:

  • Regular rotation: it is recommended to rotate part of each group, e.g. 50% change sides on a regular basis. This improves cross-team skills and also creates a view of how “the other half” thinks.
  • Full debriefs: after each game play has taken place, each team should explain and document how they were successful (either in attacking or defending), so learnings are formalised and captured.
  • Continuous learning: funds and time permitting, create an education budget for each team member where they can choose to attend a conference, external course or online learning and increase their knowledge base. It demonstrates investment in talent and also assists team morale.
  • Incentivise: introduce a trophy that is passed between teams (e.g. for not being hacked this quarter/half year etc), with the red and blue team exchanging ownership based on which was successful in the last role play.
  • Review the team composition: typically in a team of around 10 people, three would be responsible for IT security engineering, five to seven would take a SecOps/incident response role (usually outsourced), and two would act as pen testers. How does your team’s make-up look?
  • Explore 3rd party participation: a real attacker doesn’t play by the rules or follow established thinking, and will disregard etiquette, company guidelines and ethical considerations. Sometimes a genuine outsider is needed who does the unexpected, the not permitted or the daring – or simply blindsides the blue team.

FOXMOLE’s penetration testing team has extensive experience in responsibly attacking client sites to identify weaknesses, whether based on an open brief or a specific area of concern.

The greatest opportunity offered by commissioning an external group is the discovery of pervasive, underlying vulnerabilities that have not been addressed as these were simply not on the radar. Remedial action plans can be developed in conjunction with clients, with scheduled progress review points.

 

 

An open source core: the answer to cryptographic back doors?

What is a cryptographic back door?

“A backdoor is an intentional flaw in a cryptographic algorithm or implementation that allows an individual to bypass the security mechanisms the system was designed to enforce. A backdoor is a way for someone to get something out of the system that they otherwise would not be able to. If a security system is the wall, a backdoor is the secret tunnel underneath it.”
How the NSA (may have) put a backdoor in RSA’s cryptography: A technical primer, by Nick Sullivan, January 6th 2014

For any organisation concerned about the possibility of cryptographic backdoors being built into the authentication solution they invest in, open source software (OSS) can be seen as offering an alternative, for several reasons:

  • In a closed-source system it is easier to hide malicious elements, whereas OSS has a greater chance of any risk areas being discovered by the open source community.
  • Contrary to the perspective that releasing code benefits attackers because hostile audiences can see OSS code, attackers are able to reverse engineer binary (proprietary) code patches in minutes and generate exploits. Security by obscurity has never been a solid approach. Multiple academic papers demonstrate how easy it is: “in some of the cases they tried, they claimed to be able to create an exploit in minutes after receiving the patch and comparing the patched version of the application with the unpatched version.” https://isc.sans.edu/forums/diary/The+Patch+Window+is+Gone+Automated+PatchBased+Exploit+Generation/4310/
  • OSS offers the IT security team the opportunity to audit the code and conduct proper due-diligence.
  • OSS gives the IT security team the possibility to adjust the code to their own needs. Customers can, but do not have to, take part in the development of the code.
  • If source code is publicly available and a maintainer stops working on it for whatever reason, it can still be developed and maintained by anybody else.


While proprietary vendors have argued that their software is more secure because it is secret, closed source actually makes it easy to ship weak crypto, or to implement a crypto back door by selecting fixed numbers as parameters, without anyone noticing – something that cannot be hidden in OSS. OSS also offers the IT security team the opportunity to audit the code, conduct proper due diligence and even adjust it to their own needs.

With open source at its core, LinOTP reduces the risks associated with proprietary software.

 

 

 

Five typical enterprise security fails

At FOXMOLE, we have met with many large organisations, and whilst they all differ in their particular security challenges, we have observed a number of commonalities:

Lack of mitigations

One example of this is the absence of a patch process, which is surprisingly frequent. Once a vulnerability in an internal or external application has been identified, how is a patch issued, and how quickly is the fix implemented? The issue is that these processes do not recur as frequently as they should, leaving a window of opportunity for an attacker to compromise a system through known vulnerabilities. FOXMOLE has also observed that patch processes often do not address all layers: for example, only the server patches are applied, while the service layer, the frameworks in use and the applications themselves are left out.

Too often we see either a piecemeal approach that only addresses part of the network, or a reinvention of the wheel each time – as if a patch had never been applied before. With attacks more than likely to succeed at some point, however small the chance, it is time to factor in how they would be remediated, so as to minimize the chance of recurrence.

Insider threat often underestimated

In modern company cultures that (rightly) stress collaboration, assumption of best intent and adherence to HR and privacy guidelines, it can be hard to argue for factoring in the actions of a disgruntled employee. A Forrester Research report, “Understand the State of Data Security and Privacy,” showed that for 25% of survey respondents, the most common breach at their company in the past year derived from abuse by a malicious insider. If that insider has privileged account access, the risk is particularly significant.

One failure FOXMOLE sees in this respect is a focus on policies as the main solution. Companies tend to protect against external threats: they patch every external server system (reachable from the internet) but do not do the same for internal systems (and the same applies to hardening). In the end, the important systems – which are often not reachable from the internet, such as SAP, HR systems or customer analytics – are in a weak security state (default passwords on the databases, old patch levels and so on). This means that anyone with access to the local network (an insider or a subcontractor) has a very soft target from which to steal data. In addition, if employees or subcontractors can bring their own devices, they normally have administrative rights on them, can bring their own attack tools and have all the time they need to exploit systems and extract data – since no corporate compliance tool will typically check these BYOD devices.

Poor password practices

This seems like an old classic, but it presents issues in multiple ways. A recent study in Luxembourg revealed that over 40% of respondents would share their passwords in return for chocolate; the significance of handing over a password still seems not to resonate. Sharing passwords for admin accounts may be convenient and time-saving but presents major risks. Another challenge is laziness in creating the passwords themselves, with “123456” and “welcome” remaining popular – and of course easily hackable – choices. Whilst it is hard to remember a wealth of complex passwords across work and personal life, using “password”, for example, is not the smartest idea.

Linked to this is the fact that few companies seem to enforce strong passwords, or they do not store the passwords in a secure manner (e.g. bcrypt or scrypt with salts). It is essential to combine strong password policies with password change requirements that do not push users towards predictable choices. Recent research showed that 63% of confirmed data breaches involved weak, default or stolen passwords.
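
As an illustration of the secure storage mentioned above, here is a minimal Python sketch using the bcrypt package; the function names are our own, and scrypt or argon2 would be handled in much the same way.

# Minimal sketch of storing and checking passwords with bcrypt (salted, adaptive).
import bcrypt

def hash_password(password: str) -> bytes:
    """Return a salted bcrypt hash suitable for storing in the user database."""
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))

def check_password(password: str, stored_hash: bytes) -> bool:
    """Compare a login attempt against the stored hash; the plaintext is never stored."""
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

if __name__ == "__main__":
    stored = hash_password("correct horse battery staple")
    print(check_password("123456", stored))                        # False
    print(check_password("correct horse battery staple", stored))  # True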

General awareness of security

This may seem like a catch-all topic, but it is really a simple mindset issue: taking care of the basics such as locking the desktop, vetting subcontractors, challenging unfamiliar faces, not allowing visitors to walk around the building unescorted and not leaving valuables in the office. One service FOXMOLE offers is the “evil cleaner”, in which a consultant spends five minutes alone in an employee’s office to see how much someone with regular office access and bad intentions could take.

Adherence to manual approaches

In an app-driven world, it is still a shock to witness the lack of security automation and how rarely it is modelled into everyday processes. Addressing human weaknesses – errors, laziness, the absence of a repeatable and consistent approach – through automation is essential as the type, volume and complexity of security threats increase. FOXMOLE has observed on multiple occasions the absence of a defined, transparent and robust security framework.

There are no doubt many other common failings – look out for some more observations in a future blog!

LSE announces a number of new product updates for multi-factor authentication

Germany-based LSE Leading Security Experts GmbH, a holding of MAX21 Management und Beteiligungen AG (stock market symbol: MA1, ISIN: DE000A0D88T9), will expand its family of adaptive multi-factor authentication products during the second quarter of 2016. Among other updates, an offline authentication facility will be gradually integrated into the product suite. Unlike conventional OTP tokens, this new approach enables strong authentication even without a direct connection to the LSE LinOTP server.

LSE LinOTP Offline Authentication

This cross-product feature will allow companies to provide mobile workers with a secure form of offline authentication. This is particularly relevant for employees who travel a lot, or who work abroad without a direct connection to the company’s network and thus the backend OTP server. “Previously, secure two-factor authentication methods with OTP were limited to devices with a permanent network connection. Now mobile devices such as notebook computers can also be protected with real and cryptographically valid multifactor authentication schemes,” says Sven Walther, Managing Director and CTO of LSE Leading Security Experts GmbH.

Unlike other solutions being marketed, the process developed by LSE does not require secret material to be stored on the system being authenticated. The feature will become available to customers during the second quarter of 2016 through update releases of LSE LinOTP, LSE LinOTP authentication providers, and the new LSE LinOTP multi-token app.

LSE LinOTP Multi-Token App: OATH compliant

The LSE LinOTP multi-token app is an integral component of the new LinOTP family of offline authentication products. In addition to the LSE LinOTP QR token, the multi-token app supports tokens for OATH TOTP and HOTP and is therefore compatible with all OATH-based systems (like Google, Dropbox, Github, and many others). Access to the app’s data is password-protected by default. Key data can be transmitted in conjunction with LinOTP in a separately protected secure roll-out process. Initially, this solution will be available for iOS and Android.
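
For readers unfamiliar with OATH, the following Python sketch illustrates how a TOTP code is derived from a shared secret as defined in RFC 6238, using only the standard library. The Base32 secret shown is a made-up example; production systems should rely on the app and LinOTP rather than hand-rolled code.

# Minimal sketch of OATH TOTP (RFC 6238) code generation for illustration only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, digits: int = 6, period: int = 30) -> str:
    """Derive the current time-based one-time password from a Base32 shared secret."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // period                      # moving factor (time steps)
    msg = struct.pack(">Q", counter)                          # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()        # HOTP uses HMAC-SHA1 by default
    offset = digest[-1] & 0x0F                                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))  # example secret, prints the current 6-digit code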

RPM packages for simplified installation on Red Hat-based systems

During the second quarter of 2016, LSE will provide its customers with LinOTP repositories containing RPM packages. This expands the support of packaged deployment to systems based on RHEL 7 and RHEL 6. The installation for Red Hat-based systems will be streamlined and allows faster deployment using various optimized configuration templates. The LSE LinOTP RPM packages for RHEL 6/7-based systems supplement the LSE range of packages for Debian “Jessie” 8, Ubuntu 12.04, and Ubuntu 14.04.

LSE LinOTP authentication provider for Microsoft Windows and OS X®

In the course of regular product updates, the family of LSE LinOTP authentication providers will expand to include the OS X® operating system in addition to the Microsoft Windows and Linux operating systems, and for the first time offer OS X® strong offline authentication with access to LSE LinOTP. The LSE LinOTP authentication provider for Microsoft Windows will be enhanced to allow a direct connection of the LinOTP API via encrypted channels based on HTTPS.

About LSE Leading Security Experts GmbH

Since its establishment in 2002, LSE Leading Security Experts GmbH, based in Darmstadt / Weiterstadt, has made a name for itself as a leading manufacturer in the field of login security and user authentication as well as a provider of consulting services in the security industry. Within the company there are two independent operating divisions: The first division specializes in adaptive multi-factor authentication (MFA/2FA) and the specially developed open-source LSE LinOTP technology, the second division provides penetration testing, vulnerability assessment and code review services. Customers of LSE include national and international corporate customers, financial institutions, government agencies, and small and medium-sized enterprises. LSE is a part of the listed MAX21 group of companies (MA1).