December 09, 2013
In Mike Small
In September 2012 the European Commission (EC) published its strategy to “create a single set of rules for cloud computing and increase EU GDP by €160 billion annually by 2020”. This strategy identified a number of key actions, one of these being “Cutting through the Jungle of Standards”. Following a request from the European Commission, the European Telecommunications Standards Institute (ETSI) launched the Cloud Standards Coordination (CSC) initiative. In November 2013 ETSI published its final report from the CSC initiative. According to this report, “the Cloud Standards landscape is complex but not chaotic and by no means a ‘jungle’”.
The final report is based on an analysis of over 100 cloud computing use cases. It starts with a definition of the roles and parties involved in cloud computing. The obvious roles of provider and customer are expanded to include a cloud service partner (who may act as a broker) as well as government. Unsurprisingly, the use cases involve three common activities: cloud service acquisition, cloud service use and cloud service termination. These activities are broken down in more detail for a number of specific use cases. The report identifies around 20 organizations involved in standardization activities related to cloud computing, and around 150 documents. However, at the activity level it finds that seldom are more than two standards relevant to any given activity.
The report concludes that emerging cloud specific standards are not seeing widespread adoption by cloud providers. It suggests that cloud standards need to be flexible enough to allow each provider’s unique selling points to be accommodated. The report identifies the following gaps:
Interoperability – this is a significant concern, since vendor lock-in is a risk for cloud customers. The report concludes that while management protocols and interfaces, particularly for IaaS, are maturing, management specifications for PaaS and SaaS require more effort. There are many proprietary and open source solutions, but very few, if any, standards.
Security and Privacy – these are important areas of concern for cloud customers. According to the report, existing security and privacy standards are helpful in this area, but further development of common vocabularies and metrics is needed. In addition, there is a need for further standardization in the areas of accountability and cloud incident management (e.g., related to SLA infringements).
Service Level Agreements – the main requirement for standardization in relation to Service Level Agreements is the creation of an agreed set of terminology and definitions for Service Level Objectives, and an associated set of metrics for each of these. There is some ongoing work in this area, but it needs to be completed and, importantly, adopted by public cloud service providers.
Regulation, Legal and Governance aspects – the legal environment for cloud computing is highly challenging and a key barrier to adoption. Given the global nature of the cloud and its potential to transcend international borders, there is a need for an international framework and governance, underpinned by global standards.
Standards are important to cloud computing, and they will be key to obtaining the benefits of this model for the delivery of IT services. In view of this, KuppingerCole have undertaken a detailed study of cloud standards, identifying the standards that are important to the various processes involved in the selection, use and assurance of cloud services from the perspective of a cloud customer. We have classified these standards in terms of the actions that a cloud customer needs to take. You can get an overview of this subject area from our recorded webcast: Negotiating the Cloud Standards and Advice Jungle. For a more detailed view, join the workshop on this subject at EIC in Munich in May 2014.
December 07, 2013
One of the interesting aspects of the service model outlined in the "Component Identity Services" section of the FICAM TFS Trust Framework Provider Adoption Process for All Levels of Assurance (PDF) is the roles, responsibilities and expectations of each of the components. This is especially critical in the context of identity federation, where you are depending upon entities outside your security/business domain for critical identity capabilities.
BTW, I find the commonly used term "identity provider", at best, imprecise and, at worst, misleading, since as typically implemented such services are often neither one nor the other. So let me start with some standard terminology from OMB M-04-04 E-Authentication Guidance for Federal Agencies (PDF) and NIST Electronic Authentication Guideline SP-800-63-2 (PDF):
- Identity: A set of attributes that uniquely describes a person within a given context
- Token: Something that the Claimant possesses and controls (typically a cryptographic module or password) that is used to authenticate the Claimant's identity
- Individual Authentication: The process of establishing an understood level of confidence that an identifier refers to a specific individual
- Level of Assurance (a.k.a Assurance Level of a Credential): Defined as (1) the degree of confidence in the vetting process used to establish the identity of an individual to whom the credential was issued, and (2) the degree of confidence that the individual who uses the credential is the individual to whom the credential was issued
All of this works brilliantly and seamlessly when the vetting, the secure binding of the identity to a token, and the assertion of that identity to an RP are all done within a single security/business domain. The Relying Parties (RPs) within that domain have full access to the identifier as well as the set of attributes that uniquely describes a person.
The pieces get much more distributed within the context of a Federation.
First and foremost, in the majority of public sector online service use cases, the starting point of identity within a federation is the need to uniquely describe a person to an RP, so that someone from outside the RP's security/business domain can be mapped into the RP in order to deliver online services to them.
The assertion of an identifier by a CSP is not enough to resolve that person to a unique individual, especially in the US context, where there is no single mandated identity card or identifier that can be used by the RP to "look up" the identity (i.e. the set of attributes that uniquely describes a person) for resolution.
Identity in a federation is ultimately in the eye of the RP, so a CSP that is able to convey nothing more than an identifier is not providing enough information to the RP to allow it to do identity resolution. And while such a service may indeed play the role of a CSP within an enterprise/white-label scenario or a single security/business domain, without the ability to provide identity attributes that allow for resolution in a federation environment, it is asserting a Token and not a Credential, i.e. it is a Token Manager.
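To make the resolution step concrete, here is a minimal sketch of what RP-side identity resolution might look like when a CSP asserts attributes alongside an identifier. All attribute names and records here are hypothetical:

```python
# Minimal sketch of RP-side identity resolution: an asserted identifier
# alone cannot be matched against local records, but a set of attributes
# can. All attribute names and records are hypothetical.

def resolve_identity(assertion, local_records):
    """Return the single local record matching the asserted attributes,
    or None if the assertion cannot be resolved to one individual."""
    required = ("given_name", "family_name", "date_of_birth")
    if not all(attr in assertion for attr in required):
        return None  # a bare identifier (a token) is not enough to resolve
    matches = [
        rec for rec in local_records
        if all(rec.get(attr) == assertion[attr] for attr in required)
    ]
    return matches[0] if len(matches) == 1 else None
```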
These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer
December 06, 2013
This post originally appeared on GoodCode and is reposted here with permission.
How do you manage user accounts and permissions in a complex web-based system? What if you have dozens of separate components, apps or sites, but need to have unified credentials for users across all of those components?
Stormpath is a company offering a solution to this problem, in the form of a cloud-based API for user management. If you’re familiar with Active Directory or LDAP, this is something similar. Stormpath offers a JSON-based REST API for managing all aspects of user identification and authorization: users, groups, user directories, application mappings, and so on.
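To give a flavor of what a "JSON-based REST API for user management" means in practice, here is an illustrative call. The endpoint, fields and auth scheme below are assumptions made for the example, not Stormpath’s documented API:

```python
# Illustrative only: creating an account via a JSON/REST user-management
# API of the style described above. Endpoint, fields and credentials are
# placeholders, not Stormpath's documented API.
import requests

resp = requests.post(
    "https://api.example-userstore.com/v1/accounts",   # hypothetical endpoint
    json={"email": "jane@example.com", "password": "CorrectHorse9!"},
    auth=("API_KEY_ID", "API_KEY_SECRET"),             # placeholder credentials
)
resp.raise_for_status()
print(resp.json()["href"])  # APIs of this style return the new resource's URL
```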
While the REST API they provide is really clean and very well documented, they also recognize that most developers would prefer to use a library or module in their preferred language or framework, and not worry about the actual on-the-wire API at all.
This is why, some time ago, the folks at Stormpath approached us and asked us to improve their existing Python SDK. We loved the idea and, together with them, started to devise a better Python API for the Stormpath service.
In devising the API and writing the implementation, we had several priorities:
- the API wrapper should feel Pythonic and natural to developers used to Python idioms (a sketch of the feel we aimed for follows this list),
- we didn’t want to hardcode too much, instead opting to be able to quickly adapt and modify the Python module when the REST API changes in the future,
- we avoided doing too much cool magic that would make it harder to maintain,
- we wanted to build a fully automated testing suite with both unit and integration tests.
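As an illustration of the first goal, a Pythonic wrapper lets you use attribute access and iteration instead of hand-building HTTP calls. The classes below are a simplified sketch of the idea, not the actual SDK code:

```python
# Simplified sketch of a "Pythonic wrapper" over a REST resource:
# attribute access and iteration hide the underlying HTTP calls.
# This is an illustration, not the actual Stormpath SDK code.
class Account:
    def __init__(self, client, data):
        self._client = client
        self.__dict__.update(data)   # expose JSON fields as attributes

class AccountList:
    def __init__(self, client, href):
        self._client, self._href = client, href

    def __iter__(self):
        # fetch the collection from the REST API and yield wrapped resources
        for item in self._client.get(self._href)["items"]:
            yield Account(self._client, item)

# The idiom this design aims for:
#     for account in application.accounts:
#         print(account.email)
```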
We worked closely with people at Stormpath in achieving these goals and I can honestly say the end result is some of the finest Python craftsmanship I’ve seen. Don’t just take my word for it – the Python SDK is open source and hosted on GitHub (and available as a beta on PyPI). Of course, a good SDK is not just the code. It is fully documented, both at the API level (docstrings) and in the manuals (quickstart and the product guide), and it has an extensive test suite (with coverage north of 90%): unit tests, integration tests with a mocked remote service, and live end-to-end tests hitting the actual Stormpath REST API.
While we like to brag, this wouldn’t have been possible if the folks over at Stormpath hadn’t been as brilliant in designing their API in the first place, and as supportive and receptive to our ideas as they were. Their product not only looks good on paper; having gotten to know the people behind it in person, and having seen it in action, I know I’d trust my apps’ user credentials with them. So if you are in a position where separating user management out of your app is a good idea, check out Stormpath.
And if you happen to use Python, we’ve got you covered.
Deep dives into technology & architectures: The Identity & Access Management Experts Day is the place where you meet with Identity & Access Management experts for in-depth discussion on the future of Identity Management, Cloud Computing and Information Security.
Cloud Computing, Mobile Computing and Social Computing – each of these trends has been around for some time. But what we see now is the convergence of these forces, creating strong new business opportunities and changing the way we use information technology to interact with our customers and to run our enterprises. It is all about the shift of control into the hands of users, far beyond what we used to call consumerization. Identity and access is the key element in this paradigm shift...
Managing and governing access to systems and information, both on-premise and in the cloud, needs to be well architected to embrace and extend existing building blocks and help organizations move forward towards a more flexible, future-proof IT infrastructure. Join KuppingerCole APAC in this Breakfast Debate to find out how best to move from old-school, prohibition-based security to trust in access control.
In Mike Small
According to IBM, a consistent way to manage all types of risk is the key to success for financial services organizations. To support this, IBM will be rolling out its Smarter Risk offering during Q1 2014. Failure to properly manage risk has been alleged to be the cause of the financial crisis and, to force financial services organizations to better manage risk, regulators around the world are introducing tougher rules.
The underlying causes of the damaging financial crisis can be traced back to the management of risk. Financial services organizations need to hold capital to protect against the various forms of risk. The more capital they have to hold to cover existing risks, the less opportunity they have to use that capital in other ways. So fully understanding the risks faced is a key factor in organizational success.
According to Gillian Tett in her book Fool’s Gold, the roots of the financial crisis can be traced back to the Exxon Valdez disaster of 1989. To cover the billions of dollars needed for the clean-up, Exxon requested a credit line from its bankers J.P. Morgan and Barclays. Covering this enormous credit line would have required the banks to set aside large amounts of capital. In order to release this capital, J.P. Morgan found a way to sell the credit risk to the European Bank for Reconstruction and Development. This was one of the earliest credit default swaps and, while this particular one was perfectly understood by all parties, these types of derivatives evolved into instruments such as synthetic collateralized debt obligations (CDOs), which were not properly understood and were to prove the industry’s undoing.
IBM believes that, in order to better manage risk, financial services organizations need to manage all forms of risk in a consistent way since they all contribute to the ultimate outcome for the business. These include financial risk, operational risk, fraud and financial crimes, as well as IT security. The approach they advise is to build trust through better and more timely intelligence, then to create value by taking a holistic view across all the different forms of risk. The measurement of risks is a complex process and involves many steps based on many sources of data. Often a problem that is detected at a lower level is not properly understood at a higher level or is lost in the noise. Incorrect priorities may be assigned to different kinds of risk or the relative value of different kinds of intelligence may be misjudged.
So how does this relate to IT security? Well, security is about ensuring the confidentiality, integrity and availability of information. Just this last week the UK bank RBS suffered a serious outage that led to its customers’ payment cards being declined over a period of several hours. The reasons for this have not been published, but the reputational damage must be great, since this is the latest in a series of externally visible IT problems suffered by the bank. IBM provided an example of how it had used a prototype Predictive Outage Analytics tool on a banking application. This application suffered 10 outages, each requiring over 40 minutes of recovery time, over a period of 4 weeks. By analysing the system monitoring and performance data, the IBM team were able to show that these outages could have been predicted well in advance, and that the costs and reputational damage could have been avoided if appropriate action had been taken sooner.
So, in conclusion, this is an interesting initiative from IBM. It is not the first time that IT companies have told their customers that they need to take a holistic view to manage risk and that IT risk is important to the business. However, as a consequence of the financial crisis, the financial services industry is now subject to a tightening screw of regulation around the management of risk. Under these circumstances, tools that can help these organizations understand, explain and justify their treatment of risks are likely to be welcomed. This holistic approach to the management of risk is not limited to financial organizations; many other kinds of organization could also benefit. In particular, with the increasing dependence upon cloud computing and the impact of social and mobile on the business, IT risk has become a very real business issue and needs to be treated as such.
December 05, 2013
I have been working on a solution for a healthcare SaaS provider: a “reverse proxy” to help them migrate from a home-grown web access management solution. The driver for the integration was supporting an important customer who required SAML authentication. However, SAML alone was not enough. The SaaS provider used the proxy as the policy enforcement point to ensure data privacy in their multi-tenant system. So the reverse proxy had to enforce URL-level access control, not just ensure that all users were authenticated.
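As a rough illustration of that enforcement point, here is a minimal sketch of URL-level access control of the kind a reverse proxy might apply; the roles, rules and tenant logic are invented for the example:

```python
# Minimal sketch of URL-level policy enforcement at a reverse proxy:
# even authenticated users need a matching rule for their role before a
# request is forwarded. Roles, rules and paths are invented examples.
from fnmatch import fnmatch

ACL = {
    "admin":  ["/tenants/*", "/reports/*"],
    "clinic": ["/tenants/{tenant}/patients/*"],
}

def is_allowed(role, tenant, path):
    """Return True only if some pattern for this role matches the URL."""
    patterns = (p.format(tenant=tenant) for p in ACL.get(role, []))
    return any(fnmatch(path, p) for p in patterns)

# is_allowed("clinic", "acme", "/tenants/acme/patients/42")  -> True
# is_allowed("clinic", "acme", "/tenants/other/patients/42") -> False
```

A proxy that only checks "is the user authenticated?" would pass both requests; the URL rules are what keep one tenant’s users away from another tenant’s data.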
I admit, it seems a little weird to use SAML and UMA together.
One of my open questions about this solution is how the OX AS will get “user claims” (“attributes” in SAML jargon). I think the UMA AS will need to use the SAML token. In OX, you may have to consider using a Python or Java SAML API in the code for the authorization interception script. The authorization script could also handle “enrollment” of new users for “just-in-time” provisioning.
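For that claims-gathering step, the script would need to pull attributes out of the SAML assertion. Below is a bare-bones sketch using only the Python standard library; a real script would use a proper SAML library and must validate the assertion’s signature and conditions first:

```python
# Bare-bones sketch: extracting attributes ("user claims") from a SAML
# assertion. Real code must validate the signature and conditions first;
# this only shows where the attributes live in the XML.
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def saml_attributes(assertion_xml):
    """Return {attribute_name: [values]} from a SAML 2.0 assertion."""
    root = ET.fromstring(assertion_xml)
    attrs = {}
    for attribute in root.iter(f"{{{SAML_NS}}}Attribute"):
        values = attribute.findall(f"{{{SAML_NS}}}AttributeValue")
        attrs[attribute.get("Name")] = [v.text for v in values]
    return attrs
```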
Access Risk Management Blog | Courion
On Tuesday, December 10th at 11:00 a.m. Eastern, Nick Taylor, Senior Manager for Enterprise Risk Services at Deloitte, will be joining us for a webinar titled “Does Regulatory and Compliance Activity Actually Reduce Identity and Access Risk, or Is It a Rubber Stamp Exercise?” It’s sure to be an interesting conversation, and is a convenient way to earn Continuing Professional Education credits (CPEs) towards your CISSP certification.
Click here to register now.
The top audit issues from years ago are still today’s top audit issues – excessive access rights, removal of access after termination and lack of sufficient segregation of duties. Kind of makes you wonder why we bother preparing for and (hopefully) passing audits, given that breaches are becoming increasingly commonplace.
So does regulatory and compliance activity actually reduce risk? Join our panel to discuss:
- Providing least-privileged user access in an ever-changing environment
- Maintaining continuous compliance to get ahead of the audit
- Leveraging big identity and access data to uncover threats
If you register now and log in on December 10th at 11:00, you’ll be eligible to receive CPE credit towards your Certified Information Systems Security Professional (CISSP) certification.
Yesterday, I participated in an interesting discussion about the tension between a desire to keep data around for an extended period of time versus purging it quickly. On one hand, some people wanted to keep old email accounts active for an extended period of time, just in case an old email message might be needed. On the other hand, the IT folks wanted to quickly purge old information that might not be needed to meet specific legal requirements.
It reminded me of an experience early in my married life. I had been out of college for a few years, but still kept a number of old text books in my office at work. Ever the packrat, I thought that surely these old books would be of some good use to me in the future. However, running out of space in my office, I brought a stack of the books home.
I took the books out of my car and temporarily stacked them in the garage while I considered where to keep them. After a few days, in keeping with her best de-junker instincts, my wife assumed I had planned to get rid of them, and donated the whole pile to a thrift store whose truck came through the neighborhood.
I was a bit miffed when I found out what happened, but my wife gently reminded me that I would probably never miss the books. You know what? She was absolutely right. I never once missed the books, and life was a bit simpler because I didn’t have to store that unneeded stuff.
In the years since then, my packrat tendencies are nicely balanced by Claudia’s de-junking mentality. She still has to remind me from time to time that I keep too much stuff around. But she humors me by letting me maintain my little personal “museum” of old stuff. And every once in a while, I put some bit of that old hoarded stuff to good use.
The moral of this story? I’m not sure. But it was a nice memory.
December 04, 2013
In KuppingerCole Podcasts
Many organizations started their journey into the world of IAM several years ago.
In Martin Kuppinger
In various discussions over the past month, mainly in the context of Privilege Management, I raised the (somewhat provocative) claim that shared accounts are a bad thing per se and must be avoided. The counterargument I got, though, was that sometimes it is just impossible to do so.
There were various examples. One is that users in production environments need a functional account to quickly access PCs and perform some tasks. Another is that technical user accounts are required when building n-tier applications, for instance to access databases. And administrators commonly groan when approaches for avoiding the use of shared accounts such as root are considered.
There are many more examples, but when you look at reality, there are also sufficient examples of how it is possible to avoid shared accounts (or at least their use). In many healthcare environments, fast user switching has been used for years. The strict regulations in this sector have frequently led to the implementation of Enterprise Single Sign-On tools that allow rapid authentication and access to applications with an individual account. These solutions have frequently replaced previously used shared functional accounts. So why shouldn’t they work in other environments as well?
When looking at n-tier applications, it is worth diving somewhat deeper into end-to-end security. There are many ways to implement it, and standards such as OAuth 2.0 make such concepts far easier to implement. Provisioning tools have supported database systems and other systems for a number of years. Oracle has just “re-invented” database security in Oracle Database 12c, with tight integration into IAM (Identity and Access Management). Aside from the argument that end-to-end security just does not work (which is wrong), I sometimes hear the argument that it is too complex. I don’t think so. It is different. It requires a well-thought-out Application Security Infrastructure, something I was writing about years ago. It requires changing the way software architecture and software development are done. But in many, many cases technical accounts are used primarily for convenience – architects and developers just do not want to consider alternative solutions. And then there is always the “killer argument” of time to market, which is not necessarily valid.
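To make the n-tier point concrete: instead of the middle tier authenticating downstream with one shared technical account, each downstream call can carry a token bound to the end user. A hedged sketch of the pattern, with a placeholder endpoint:

```python
# Sketch of end-to-end security in an n-tier application: the middle tier
# forwards a token tied to the end user (e.g. an OAuth 2.0 access token)
# instead of using one shared technical account. The endpoint is a
# placeholder invented for this example.
import requests

def fetch_record(user_access_token, record_id):
    # The backend authorizes the *user*, not an anonymous service account,
    # so access decisions and audit trails remain per individual.
    resp = requests.get(
        f"https://backend.example.com/records/{record_id}",
        headers={"Authorization": f"Bearer {user_access_token}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()
```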
When I look at administrators, I know of many scenarios where root or Windows Administrator accounts are rarely used, except for firefighting operations. The administrators and operators instead rely on functionally restricted personal accounts, used alongside the personal accounts they use for standard operations such as email access. That works well and does not hinder them from doing a good job in administration and operations. But it requires thinking thoroughly about the concept for these accounts.
So there are many good reasons to get rid of shared accounts, but few, if any, valid ones to continue using them. Given that these accounts are amongst the biggest single security risks, it is worth starting to rethink their use and openly considering alternative solutions. Privilege Management tools only help with the symptoms. It is time to start addressing the cause of this security risk.
Have a look at our KuppingerCole reports. We will soon publish a new Leadership Compass on Privilege Management. Given that shared accounts are a reality and will not disappear quickly, you might need a tool to better secure them. Have a look at the new report, which will help you select the right vendor for your challenges.
Welcome to the Executive Director’s Corner. We verify trusted ID system actors, build markets, enable communities, influence stakeholders, and give our members competitive industry visibility. We share one common goal: to collaborate to develop and operate services that build markets enabling the use of high-value trusted identity credentials.
As Executive Director (ED) of Kantara Initiative, I get a full overview of the activities and achievements of the organization and our Members. Here are some of the recent highlights.
Items of Interest
- Members: Radiant Logic joined the Kantara Initiative Board of Trustees in November. Radiant Logic aligns strongly with the Identity Relationship Management Pillars to help drive “contextual identity” solutions. In 2013 the Kantara Initiative Board of Trustees doubled in size.
- Federal Cloud Credential Exchange (FCCX): The US Government is launching an identity credential exchange known as FCCX. The USPS is the authority managing the contract for implementation and operation of this service, which has been awarded to SecureKey – a member of Kantara Initiative. Kantara Initiative is the premier organization that will provide policy and technology interoperability verification for Identity Provider organizations that seek to be eligible to connect to FCCX. Kantara Initiative performs this function via its approval as a Trust Framework Provider by the US Federal Identity, Credential, and Access Management (FICAM) program.
- NSTIC: Kantara is working with NIST to produce an “NSTIC Pilots in Motion” industry day on January 30th in Washington DC, hosted by the Department of Commerce. Kantara Initiative is participating in various roles in 5 of the National Strategy for Trusted Identities in Cyberspace (NSTIC) pilot initiatives, further illustrating its central role within industry development.
- Assurance Approval Registered Applicants: TrustX has been accepted as a registered applicant, signifying its progress working through Kantara Initiative Service Approval with US Federal Additional Criteria applied.
- In the Pipeline: There are 5 organizations in the pipeline for Kantara Initiative approval.
- NSTIC: Kantara Initiative is operationalizing its liaison with the Identity Ecosystem Steering Group in alignment with the NSTIC principles.
- DirectTrust/EHNAC: Kantara Initiative leaders continue to develop a joint feasibility study toward recommendations focused on harmonization of certification/approval programs within the Identity Management space.
- NSTIC Resilient Network: Kantara Initiative Identity Assurance Work Group is advancing its role in the Resilient Network NSTIC pilot toward the development of assessment criteria in alignment with the Resilient Network architecture.
- Workshops and Events: Event planning is under development for 2014 to create a field-marketing plan. Events are being produced through July of 2014 including: RSA, EIC, NIST aligned Industry Roundtables, CIS and more…
- Webinar: Kantara Initiative was pleased to participate in an IEEE-SA-produced webinar focused on the issues and benefits of mobility. The recording should be available soon.
- Industry Events Participation: Kantara Initiative participated in the following events:
- Date: November
What: Identity North
Where: Vancouver, BC
Who: Industry Workshop with Canadian and International stakeholders
- Date: November
What: TSCP Symposium
Where: Washington DC
Who: Kantara Initiative Executive Director
- November 26, 2013 – 2014 Events on the Horizon
As we move to close 2013, we’re already excited by all of the opportunities for 2014. Planning is underway on some exciting initiatives and great events for 2014! A taste of the details is below. The following events are confirmed for 2014 NSTIC Pilots in Motion – Jan 30: This is a special event produced more…
- November 21, 2013 – Snapshot: SAML IOP Past, Present, and Future
Our guest blogger today is Kantara Member Rainer Hörbe. Rainer has been a contributor, architect and standards editor for the Austrian eGovernment federation. In the European cross-border eHealth federation project epSOS he served as security policy adviser. As a Member of Kantara Initiative, OASIS, and ISO SC27 he is engaged in developing models and standards more…
- Date: Dec 9-13
What: OECD ITAC
Where: Paris, France
Who: Nat Sakimura
December 03, 2013
We are currently evaluating the idea of incorporating the Asimba SAML platform on the Gluu Server (in addition to Shibboleth). SAML can be confusing, even to the experts. I worked on the diagram below as a simple overview of why a SAML proxy might be useful, and where it would fit in the Gluu open source stack.
A few things to note:
- The main advantage of the proxy is a very simple configuration for the SP. If the website is SaaS or off-the-shelf software, you may only get one way to trust the IDP. Discovery and redirection to the respective home domain IDP are handled by the proxy.
- Internal websites that don’t care about other federated IDPs can just point to your SAML IDP directly.
- Applications using the Asimba proxy can request a specific authentication type via SAML ACR request.
- Authentication business logic is handled in OX – no need to support 2FA in both SAML and OAuth2.
- In many cases, the OX OP also grabs a legacy SSO ticket (e.g., CAS, SiteMinder, etc.)
- In a federation with many IDPs, if the participants trust the federation operator, it is efficient for the federation operator to manage trust with the websites. For example, instead of asking 1,000 IDPs to update their configuration, you just update the proxy.
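To illustrate the discovery role mentioned above, the proxy essentially maintains a routing table from a user’s home domain to the right IDP, so the SP only ever trusts the proxy. A toy sketch, with invented domains and endpoints:

```python
# Toy sketch of the discovery/routing role a SAML proxy plays: one trust
# relationship for the SP, many IDPs behind the proxy. Domains and
# endpoints are invented for illustration.
HOME_IDP = {
    "acme.edu":    "https://idp.acme.edu/saml/sso",
    "example.com": "https://sso.example.com/saml2",
}

def route_authn_request(user_email):
    """Pick the home-domain IDP to forward the authentication request to."""
    domain = user_email.rsplit("@", 1)[-1].lower()
    return HOME_IDP.get(domain, "https://proxy.example.org/discovery")
```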
KuppingerCole’s Identity, Cloud Risk & Information Security Summit is a highly interactive event offering you, as an IT professional, the opportunity to discuss your most challenging topics and questions with your peers and with KuppingerCole analysts in a discreet environment, moderated by teams of practitioners and analysts. IRS is a dialog-based event, not a speaker-to-delegate event. It consists of a series of dialogs around the key topics for a holistic view of your enterprise...
Information security in general, and identity management in particular, have become critical, increasingly sophisticated, and costly components of almost every online service. Developers must either invest a lot of effort to implement and maintain them or integrate a third-party solution. Currently, the market for such solutions is very large and mature, but solutions from traditional vendors like Oracle, Microsoft or IBM are usually prohibitively expensive for smaller businesses and...
Radiant Logic joins Kantara Initiative Board of Trustees to drive contextual Identity Management as a business enabler.
December 3rd, 2013, Piscataway, NJ – Kantara Initiative, a global identity community, announced today that Radiant Logic has joined the Kantara Initiative Board of Trustees. Radiant Logic joins industry innovators on the Kantara Board, alongside Members from CA, Experian, ForgeRock, the Internet Society, Nomura Research Institute, TERENA and the Government of Canada.
Radiant Logic aligns with Kantara Initiative’s Identity Relationship Management Pillars. The IRM pillars are a set of business and technical values that are transforming the language of Identity Management (IdM) into the language of business. The pillars of IRM enable organizations to move IdM beyond the IT department and into the business development team, where IdM is a business enabler driving new lines of business and revenue opportunities.
“We are very happy to join The Kantara Initiative community, and champion Identity Relationship Management,” said Michel Prompt, CEO at Radiant Logic. “Whether it’s identity and security, identity and the internet of things, identity and context, or identity and privacy, we want to be sure that we are focusing on the right thing on all levels of the infrastructure.”
Kantara Initiative influences transformative innovation within Identity Management. Kantara Initiative helps organizations to establish verified trust in the use of digital identities for business and government services by collaborating with industry to develop and operate Trust and Interoperability Frameworks and Programs with global focus and application.
“We’re pleased to welcome Radiant Logic as a Kantara Initiative Trustee and an industry leader demonstrating that Identity Management is not only a security issue, it’s a solid business enabler,” said Joni Brennan, Executive Director, Kantara Initiative.
About Radiant Logic: As the market leader for identity virtualization, Radiant Logic delivers simple, logical, and standards-based access to all identity within an organization. The RadiantOne federated identity service enables customizable identity views built from disparate data silos, driving critical authentication and authorization decisions for WAM, federation, and cloud deployments. Fortune 1000 companies rely on RadiantOne to deliver quick ROI by reducing administrative effort, simplifying integration, and building a flexible infrastructure to meet changing business demands.
About Kantara Initiative:
Kantara Initiative is an industry and community non-profit organization enabling trust in identity services through our compliance programs, requirements development, and information sharing among communities including industry, research & education, government agencies and international stakeholders.
In Dave Kearns
In my last post (“Dogged Determination”) I briefly mentioned the FIDO Alliance (Fast Identity Online), with the promise to take a closer look this time at the emerging internet password-replacing authentication system. So I will.
But first, an aside. It’s quite possible that the alliance chose the acronym “FIDO” first, then found words to fit the letters. Fido, at least in the US, is a generic name for a dog which came into general use in the mid 19th century when President Abraham Lincoln named his favorite dog Fido. Choosing a word associated with dogs harkens back to the internet meme “On the internet nobody knows you’re a dog”. With the FIDO system, no one except those you intended would know who you are. That’s my theory and I’m sticking to it.
FIDO was in the news last week when Fingerprint Cards (FPC) and Nok Nok Labs announced an infrastructure solution for strong, simple online authentication using fingerprint sensors on smartphones and tablets. The two companies have initially implemented the joint solution using Nok Nok Labs’ client and server technology and commercially available Android smartphones with the FPC1080 fingerprint sensor, in order to demonstrate readiness to support the emerging FIDO-based ecosystem.
That should give you an idea of the thrust of the Alliance.
The FIDO system doesn’t require a biometric component, but it appears to be highly recommended. From the Alliance’s literature:
“The FIDO protocols use standard public key cryptography techniques to provide stronger authentication. During registration with an online service, the user’s client device creates a new key pair. It retains the private key and registers the public key with the online service. Authentication is done by the client device proving possession of the private key to the service by signing a challenge. The client’s private keys can be used only after they are unlocked locally on the device by the user. The local unlock is accomplished by a user-friendly and secure action such as swiping a finger, entering a PIN, speaking into a microphone, inserting a second-factor device or pressing a button.
The FIDO protocols are designed from the ground up to protect user privacy. The protocols do not provide information that can be used by different online services to collaborate and track a user across the services. Biometric information, if used, never leaves the user’s device.”
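The registration-and-challenge flow described in that excerpt can be sketched in a few lines. This illustrates the public-key pattern only, not the actual FIDO protocol messages or key attestation:

```python
# Illustration of the public-key pattern FIDO describes: the device keeps
# the private key, the service stores the public key and verifies a signed
# challenge. Not the actual FIDO wire protocol.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the device creates a key pair and registers the public key.
device_private_key = ec.generate_private_key(ec.SECP256R1())
service_public_key = device_private_key.public_key()

# Authentication: the service sends a fresh challenge; the device signs it
# after a local unlock (fingerprint, PIN, ...); the service verifies.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
service_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
# verify() raises InvalidSignature on a forged response, and no shared
# secret ever crosses the wire.
```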
FIDO is, first and foremost, about strong authentication. Two-factor authentication is a requirement. A biometric component (fingerprint, voiceprint, etc.) is highly recommended.
President of the Alliance is Michael Barrett, formerly CISO for PayPal, formerly president of the Liberty Alliance and before that VP, Security & Privacy Strategy for American Express. Interestingly, the VP of FIDO is Brett McDowell, currently Head of Ecosystem Security at PayPal, who was previously Executive Director of the Liberty Alliance and its successor, the Kantara Initiative. He also served as Management Council chair of the USA’s NSTIC (National Strategy for Trusted Identities in Cyberspace) Identity Ecosystem Steering Group. These are two guys who know identity systems inside out.
PayPal (which is always looking for stronger authentication methods) and Nok Nok Labs (which is always looking for better ways to use biometrics as well as strong authentication) were two of the founders of the alliance which has now grown to over 50 members including such big names as Google, Blackberry, Lenovo, MasterCard and Yubico as well as just about everyone in the biometric device space.
It’s a good cast of characters, but is that enough?
The impact of so many biometric-friendly members means that the Alliance first has to answer (again) all the questions about the “problems” with biometric authentication. Now, if you know me at all, you know that “I ♥ Biometrics”, but getting others to like them is an uphill battle. In fact, the continuing argument about the security of passwords (I’ve been involved in it for 15 years!) is really a side issue for the FIDO Alliance. More important, I think, is its reliance on the Online Secure Transaction Protocol (OSTP).
OSTP is a protocol designed and issued by FIDO (they say they will turn it over to a public standards body once it is fully “baked”). It is explained in a white paper (“The Evolution of Authentication”, a PDF file), where it is generally referred to as the “FIDO protocol”. The heart of the system is the FIDO Authenticator, which the white paper explains:
“The FIDO Authenticator is a concept. It might be implemented as a software component running on the FIDO User Device, it might be implemented as a dedicated hardware token (e.g. smart card or USB crypto device), it might be implemented as software leveraging cryptographic capabilities of TPMs or Secure Elements or it might even be implemented as software running inside a Trusted Execution Environment.
The User Authentication method could leverage any hardware support available on the FIDO User Device, e.g. Microphones (Speaker Recognition), Cameras (Face Recognition), Fingerprint Sensors, or behavioral biometrics, see (M. S. Obaidat) (BehavioSec, 2009).”
As I said, biometrics strongly recommended.
Read the paper for more details of how it works.
Can the FIDO proposal succeed? Yes, it’s a well thought-out system that does provide strong authentication with a high degree of confidence that the user is who they claim to be.
Will the FIDO proposal succeed? That’s much more problematic. It requires that relying parties and Identity Providers (which can be the same entity) install specific server software, and that users install specific client software. The client part could be an easier “sell” if it comes along with the biometric devices and services that FIDO members provide – easier, certainly, in a smartphone environment, less so in a desktop/browser environment. History says that anything requiring users to voluntarily install something, or requiring relying parties to buy, install and maintain single-purpose services, is a long shot. And the FIDO solution requires both. Still, if the members of the FIDO alliance provide the software and compel their clients to install it, a tipping point could be reached. If so, I’d applaud it.
I will note that a number of my colleagues believe I’m reading too much into the so-called “biometric requirements” of FIDO, noting that hardware tokens (represented by Yubico and other members) are an even easier implementation since most modern smartphones can handle a microSD card, which could act as a hardware token – or, at least, turn the phone into a hardware token. It would be protected by a PIN, which users are familiar with entering for all sorts of services.
While I do agree with all that, the typical PIN is 4 digits, so there are 10,000 possible combinations (0000 to 9999). That’s not strong enough for my taste. A brute-force attack could try all possibilities within minutes, and – since some combinations (1234, 1111, 1379, 1397, etc.) are far more popular than others – it could be only seconds before the code is broken. Nevertheless, if this would increase the uptake of the FIDO system, I’d be behind it – at least as a good beginning.
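The back-of-the-envelope math behind that concern (the guess rate is my assumption):

```python
# Back-of-the-envelope PIN keyspace math; the guess rate is an assumption.
keyspace = 10 ** 4                 # 4-digit PIN: 0000..9999
rate = 10                          # guesses per second, unthrottled
print(keyspace / rate / 60)        # ~16.7 minutes to exhaust the space
# A 6-digit PIN (10**6) at the same rate takes ~28 hours, so throttling,
# lockout and popular-PIN blacklists matter more than raw digit count.
```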
In Martin Kuppinger
Last week, the German BSI (Bundesamt für Sicherheit in der Informationstechnik, the Federal Office for Information Security) published a document named “ICS-Security-Kompendium”. ICS stands for “Industrial Control Systems”. This is the first comprehensive advisory document published by the German BSI on this topic. The BSI puts specific emphasis on two facts:
- ICS are widely used in critical infrastructures, e.g. utilities, transport, traffic control, etc.
- ICS are increasingly connected – there is no “air gap” anymore for many of these systems
It is definitely worth having a look at the document: it provides an in-depth analysis of security risks, best practices for securing such infrastructures, and a methodology for ICS audits. Furthermore, it has a chapter on upcoming trends, such as the impact of the IoT (Internet of Things), the so-called “Industry 4.0”, and Cloud architectures in industrial environments. Industry 4.0 stands for the 4th industrial revolution, in which factories organize themselves – the factory of the future.
As much as I appreciate such a publication, it lacks – from my perspective – a view of two major areas that are tightly connected to ICS security:
- Aside from the ICS themselves, there is a lot more IT in manufacturing environments that frequently is not in the scope of the corporate IT Security and Information Security departments. Aside from attacks on such systems, for instance in the area of PLM/PDM (Product Lifecycle/Product Data Management), there are standard PCs that might serve as entry points for attacks.
- This leads directly to the second aspect: it is not only about technical security, but about re-thinking the organizational approach to Information Security in all areas of an organization, i.e. taking a holistic view of all IT and information. Separating ICS and manufacturing IT from “business IT” does not make sense.
The latter becomes clear when looking at new business cases such as the connected vehicle, smart metering, or simply remote control of HVAC (heating, ventilation, and air conditioning) and other systems in households (or industry). In all these scenarios, there are new business cases that lead to connecting both sides of IT.
Also have a look at our KuppingerCole research on these issues, such as the KuppingerCole report on critical infrastructures in the finance industry (not about ICS) and the KuppingerCole report on managing risks to critical infrastructure.
While we here in the United States were recovering from our turkey-induced malaise or out battling the crowds on Black Friday, my colleague Holger Reinhardt, located in Germany, was sharing his thoughts on Internet of Things protocols. If you...
December 02, 2013
This interview originally appeared on Planet Cassandra and is reposted here with permission.
Les Hazlewood: CTO and Co-Founder at Stormpath
Matt Pfeil: Founder at DataStax
TL;DR: Stormpath is a user management API for developers that handles identity management, user management, and security for applications.
Stormpath supports millions of accounts, and all of the statistics, data and analytics around those accounts created a need for a data store that could handle extreme load and scale. Alongside scale, Les required high availability, as Stormpath could “never, ever go down”, and for that they deployed Cassandra.
Stormpath shifted off MySQL to Cassandra, cutting import time for their customers from 5 days to mere hours. Their deployment is entirely in the Amazon cloud, across multiple data centers. Depending on what they’re doing, they have a replication factor of three to five, with a minimum of five nodes deployed at all times for their Cassandra cluster.
Hello, Planet Cassandra listeners. This is Matt Pfeil. Today I'm joined by Les Hazlewood from Stormpath. Les, thanks for taking some time today to talk about your Apache Cassandra use case.
Why don't we start things off by telling everyone a little bit about yourself and what Stormpath does.
Sure. Again, my name is Les Hazlewood. I'm the CTO and Co-Founder of Stormpath. Stormpath is a user management API for developers. We're fundamentally a REST+JSON API hosted in the cloud, and we handle identity management, user management, and security for applications.
Very cool. What's the use case for Cassandra?
For us, we have to support hundreds of thousands, millions of accounts across multiple different directories from different customers around the world. So, as you get into millions of accounts, millions of records, and all of the statistics, data and analytics around those records, we needed a data store that could handle the extreme load and scale, and could scale linearly as we grow as a startup.
Cassandra was a perfect choice for us because it has no single point of failure. There’s no master; it’s a pure distributed, replicated database. Because of the nature of our business – we process authentication attempts for hundreds of thousands, if not millions, of accounts around the world – we can never, ever go down. One of our primary concerns is high availability, and Cassandra’s fault-tolerant distributed architecture helps us guarantee that for our customers.
Since you’re talking about how important uptime is, can you talk a little bit about what your infrastructure looks like? Are you running in the cloud, across multiple data centers?
Yeah, so we are across multiple data centers. We’re hosted on Amazon, and we’re 100% Amazon-backed, currently. We’ll probably expand into other data centers, like Rackspace and others, soon enough. We’re across all the east coast zones, and depending on what we’re doing, we have a replication factor of three to five, and so we have a minimum of five nodes deployed at all times for our Cassandra cluster. We’re running m2.4xlarge instances to handle the horsepower.
Once we warrant or justify the load to move up to the SSD-based machines, we will. So far, we're perfectly fine with those five machines at the moment.
So, obviously uptime is of high importance. Is your dataset large, or is it primarily the ability to have the data in many locations close to the end user that's more of a driver for you?
Both, actually. We need to be able to tolerate up to a significant number of machines dying to still be able to process an authentication attempt. We have to make sure that the data is always available to us. That's really important, but in addition to that, we're rolling out features all the time that require a lot of data, a lot of load to be put on the servers. For example, time series data on what authentications succeed, what authentications fail over time, how many users are using a particular application at any point in time, events that users take while they're using applications. This is all very heavily time series-based data, so we can report and give charts and analytics based on user actions and user behavior to our customers.
The quantity of data for us is significantly increasing. Additionally, we also have a very interesting use case for our own product in that we have an on-premise agent that interacts with Active Directory and LDAP installations, and then mirrors that data securely into the cloud. For certain LDAP or AD installations, there could be multiple hundreds of thousands, if not millions, of account records that need to be transferred to us in the cloud.
To do that efficiently and process that information quickly, it's very hard to do with, say, relational database technologies. We can actually pump all that data into Cassandra as soon as it comes in to our infrastructure. Then we can use Cassandra techniques, like pagination and storing certain results per number of Cassandra rows. That allows us to chunk up the data very quickly, very efficiently, and we can process it very, very quickly.
Our recent tests had us pumping in 50,000 accounts in under 200 milliseconds, which is ridiculous compared to other technologies out there. I think there are other platforms, say Google’s, that have similar technology. Their import might take customers four or five days to a week, whereas because of Cassandra we could probably do that on the order of a couple of hours.
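For flavor, a chunked, paged read of a large account set with the DataStax Python driver might look like the sketch below; the cluster address, keyspace, table and fetch size are invented for illustration:

```python
# Sketch of transparent paging with the DataStax Python driver, the kind
# of chunked processing described above. Cluster address, keyspace, table
# and fetch size are invented for illustration.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

session = Cluster(["10.0.0.1"]).connect("identity")
query = SimpleStatement(
    "SELECT id, email FROM accounts WHERE directory_id = %s",
    fetch_size=5000,          # the driver fetches one page at a time
)
for row in session.execute(query, ["dir-123"]):
    print(row.email)          # placeholder for per-account processing
```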
That's amazing. Talking about other technologies, did you start out on Cassandra? Or did you migrate off of another solution?
We migrated off. There are certain things that we need to support that do require ACID transactions, and our team was basically a traditional Spring/Hibernate shop running on an HA MySQL pair. We had a relational database, and that’s what the product initially started out with. We had fairly large instances for replication, but we knew that as we grew as a startup and started getting much bigger customers, that wouldn’t scale. So we shifted over to Cassandra recently, and we still have transactional things running on our MySQL cluster, but all of the new functionality we’re rolling out is based on Cassandra.
Great. So one last question for you: What's your favorite feature that's come out in Cassandra over the iterations?
I’m really, really looking forward to the lightweight transactions. We haven’t really been able to leverage those just yet. We also think CQL has been a nice feature. There were a couple of things it lacked in the earlier days, but the DataStax team has really done a great job in filling those gaps, so it’s really nice and mature now. That helps people who are moving from the relational database world into the Cassandra world; it’s a little easier for them to migrate. That’s been beneficial. We also like Thrift, so it’s nice to be able to choose one or the other depending on needs.
We also really appreciate Cassandra’s virtual node capability. We have basically one systems engineer who maintains and manages our Cassandra clusters, and he does it with no problems. He’s got the whole thing automated via Chef, using virtual nodes. The fact that we only need one person to do this speaks volumes about Cassandra’s scale and capability from a hands-off perspective. It’s been really good for us from an ops side as well.
Les, I want to thank you for your time today. Is there anything else you'd like to share about the future of Stormpath, or anything with the community?
One of the things our customers had been screaming for is the ability to supply ad hoc data to Stormpath. Whenever somebody creates a group or an account within Stormpath for their application, they want to be able to attach any data they want that’s specific to their application.
We’re rolling that out right now, and we couldn’t have done it without Cassandra, because we needed a schema-less data store that could scale with huge quantities of data. We feel that Cassandra was the best option for us to roll that new feature out, and we’re seeing ad hoc data supplied directly by end users persisted at scale with Cassandra. We don’t think there would have been an easy way to roll that out otherwise. Our most requested feature is now directly backed by Cassandra, and it has been a great experience for us.
Stormpath is hiring!
It’s hard to make accurate predictions about adoption of SSO protocols. It’s impossible to make a detailed model when the known inputs are so vast. With that inherent disclaimer about the difficulty of forecasting, the following graph represents Gluu’s view of the likely adoption and un-adoption of three very important web authentication standards: SAML, CAS, and OAuth2 (specifically OpenID Connect).
It makes sense to start any conversation about web authentication standards with the grand-daddy of Web SSO, the Security Assertion Markup Language (SAML). This is the current leading standard for enterprise inter-domain authentication. It is widely supported by off-the-shelf software and major SaaS vendors like Google, Salesforce, Workday, Box, Amazon, and many others. SAML is the basis for extensive B2B, government and educational federations around the globe. Gluu’s prediction is that providing SAML endpoints and services will remain critical for domains for years to come. Over the next 15 years or so, however, organizations will look to consolidate on OAuth2-based trust networks, and to end-of-life and decommission SAML relationships.
The Central Authentication Service (CAS) defined one of the first Web SSO protocols. It’s a simple-to-use API, supported by several open source CMS platforms. Backed by LDAP, it was a good choice for many organizations looking to centralize username/password authentication. It also allowed access control based on network address, to restrict which servers could use the enterprise web authentication service. With the availability of newer, more functional authentication standards like SAML and OpenID Connect, new applications should be directed away from CAS, and older applications should be asked to upgrade to one of the newer protocols. CAS was great, but there are better options now.
OpenID Connect is a profile of OAuth2 that provides several services related to authentication. In years past, federation experts thought OpenID would be ubiquitous. Then a smaller subset of federation experts thought OpenID 2.0 would be ubiquitous. Now the community has coalesced, and a large group of federation experts is predicting that OpenID Connect will become ubiquitous. It’s a risky position, but it holds up when you look at some simple indicators:
- Support from large consumer IDPs: Google, Microsoft, Yahoo and probably Facebook
- Consolidation of several protocol communities, such as OpenID, OAuth2, WS-* and a subset of the SAML community
- The move in the consumer market to JSON/REST authentication APIs
- The explosion of mobile applications requiring better authentication APIs for non-web interactions
- The expanded role of a “client” acting as an agent of the person to access Web APIs
- New standards building on OpenID Connect authentication, such as UMA and the new OpenID Connect Native SSO working group
- Even Scott Cantor has acknowledged at InCommon Camp that Shibboleth 3.0 is being designed to make it easier to support OpenID Connect in the future!
So we’re going out on a limb here and predicting that OpenID Connect is actually going to catch on this time. We are also, perhaps, going to help our own cause by providing a scalable, production-quality open source implementation of OpenID Connect: oxAuth.
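Part of OpenID Connect’s appeal is how little a relying party needs to get started: the front-channel request is just a URL with a handful of parameters. A minimal sketch, with placeholder endpoint and client values:

```python
# Minimal sketch of an OpenID Connect authorization request URL.
# The endpoint, client_id and redirect_uri are placeholders.
from urllib.parse import urlencode

params = {
    "response_type": "code",             # authorization code flow
    "client_id": "my-client",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "openid profile email",     # "openid" makes it Connect, not plain OAuth2
    "state": "af0ifjsldkj",              # CSRF protection, echoed back by the OP
    "nonce": "n-0S6_WzA2Mj",             # binds the ID token to this request
}
print("https://idp.example.com/authorize?" + urlencode(params))
```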
If anyone disagrees or agrees with the admittedly arbitrarily drawn graphs above, feel free to comment below!
Welcome back to part three of my latest blog series about bringing identity back to IAM. And when I say bringing identity back, I do not mean just to the periphery – at Radiant, we believe identity should be right at the center of your organization. That’s right: it’s time for the hub discussion. Now, we’ve had this talk a time or two (or ten), looking at the idea that the complexity of today’s infrastructures demands a sort of logical center, a place where identities can be federated, rationalized, and transformed according to the unique requirements of each service provider or application.
If you remember my last post, we looked at metadirectories as our last attempt to centralize identities – and we know how well that went. (It was the same story for the older “enterprise directory,” as well.) But what if we could keep the idea of a central identity while achieving this “logical” centralization in a more flexible way – one that respects the role of the silos as masters of their domains? A way that evokes the “meta” part of “metadirectory,” while providing a more agile “directory” part?
And that’s where virtualization comes in.
The Radiant Mantra: Manage Globally, Act Locally
Virtualization is the logical part of “logical center” I mentioned above, and it’s how we can create a smart hub that’s based on the cooperation of the periphery. This idea evokes the exact sense of the word “federation”—federate the work, giving each actor its own role to play.
At the center of such a system is a virtualization engine, busy pulling information from your heterogeneous backends and making that data ready to meet the needs of your applications or service providers, whether they’re in-house, on the web, or in the cloud.
The first step in this “active summarization” is to create a reference table of all identifiers: the “golden list” of all your identities, built out of many disparate data representations. Once the list is complete, your federated identity hub can produce the new – and often different – views of identity that your applications need. That’s the “manage globally” part of the mantra.
Now when a request comes from any service provider—to check a user’s credentials at login, for instance—that task can also be delegated back to the correct authoritative local store, thanks to this global reference table. After all, we may complain about identity silos, but they’re not just silos, they’re application specialists, designed to do exactly what they do very well. So their “views” are essential to user authentication and need to be preserved—just think about the role of Active Directory in this food chain. This is the “act locally” part of the equation, and taken together, these actions give you an identity infrastructure that’s truly designed for federation.
But let’s take a closer look at how we got to this idea of “manage globally, act locally.”
The Rosetta Stone of ID: Global Integration Backed by Local Ownership
I’ve made a bold claim: that we need to respect both the global and local views, that it’s not enough to merely pull things into the center like the metadirectory, or to simply delegate and proxy back to the different stores like the most commonly sold form of the “classical” virtual directory. We really need a way to do both. We need to synthesize all user data into a reconciled global list of users but built on top of what already exists, and will continue to exist, because it covers some very specific aspects of identity that we need.
If we can agree on this, then the requirement for a dictionary of identifiers—a “Rosetta Stone” that maps a global identity into its specific application representation and makes it easy to look things up—becomes clear. Of course, life would be easier if we were all operating according to a set of predefined “generic” global identifiers, defined at the start of all processes. But the reality is that we’re dealing with different applications adopted at different times from different vendors—each speaking a different language and checking credentials in a different way. They don’t even agree on the basic building block of identity, the identifier!
But don't feel bad, we're all in the same boat—even Pyongyang probably has tons of diversity and duplication in its monolithic identity system, with no common global identifiers. So if you cannot impose a common identifier upon each application, then you have to establish this correlation after the fact. That's what we call a "federated identity system." And yes, we believe that the accelerated deployment of federation will require such a change at the level of the identity structure. If any sizeable enterprise wants to manage its users and access the cloud, then the structure of an IdP will require a federated identity system (whether it's hosted in-house or in the cloud only matters in terms of your particular deployment). I believe that such a federated identity system, acting as a consolidated view of your identity, is the exact technical characterization of the function that eluded the so-called "meta"directory.
The Hook-Up on Look-Ups: Remapping Identifiers Inside the Hub
By mapping each identifier, each “global” identity in the Federated ID system has its corresponding translation in a specific silo. So when you look up that individual, you’re able to use the correct identifier of a user for each given application. So my global identifier might be “Michel,” but I am also known as:
Michel = Mike in A = Michael in B = MPrompt in C
That’s the first function of the hub, establishing that there’s an individual called “Michel” who’s common across systems, but also known by other names.
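To make that concrete, here is a minimal sketch of such a correlation table in Python. The application names and identifiers echo the example above; the data structure and function are purely illustrative, not any actual product API:

# A tiny "Rosetta Stone": each global identifier maps to its local
# representation in every application silo.
CORRELATION_TABLE = {
    "Michel": {"A": "Mike", "B": "Michael", "C": "MPrompt"},
}

def local_id(global_id, app):
    """Translate a global identifier into the identifier a given app expects."""
    try:
        return CORRELATION_TABLE[global_id][app]
    except KeyError:
        raise LookupError("no mapping for %r in application %r" % (global_id, app))

# Delegating a credential check to silo B uses B's own identifier:
# local_id("Michel", "B")  ->  "Michael"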
A Global List from Multiple Systems
This correlation table is the start of being able to link all the information about a given individual—and it's no mean feat to pull all this together. Even assuming you have no data quality issues in your system (and if so, I'd like to hire your data entry team!), what happens when you have a name collision?
Why You Need Identity Correlation
Using the social security number or something similar only works if the application supports an additional attribute for identification based on SSN. The only thing you know for certain is that each application will create a unique identifier for each person—or reject it during registration. But uniqueness within one app does not mean the same identifier is used across apps. Generally, this is fine because most applications are closed worlds for most functions, except for people (and a few objects like products), whose scope and activity span more than one application. Remember: Identity is the center of your activity because people are always at the center.
Building a Global Virtual Identifier to Enable a Global Virtual Registry
Thanks for reading and I'll see you back here for Part 4! Happy Thanksgiving next week to all my US readers…
A postscript on provisioning: Be sure to check out Paul Moore's excellent post, "Why is User Provisioning So Hard? Doesn't SCIM Fix It?", on the Centrify Express Community blog, Clarifying Cloud Identity. It makes some great points about the difficulty of provisioning to many of the big players in the SaaS and cloud realm. He's right—administrators want everything to happen automatically, but those of us with our hands in the code understand how much effort it takes to make the magic happen.
The post Identity at the Center: Why You Need a Federated Identity Hub appeared first on Radiant Logic, Inc
How a Federated ID Hub Helps You Secure Your Data and Better Serve Your Customers
Welcome back to my series on bringing identity back to IAM. Today we’re going to take a brief look at what we’ve covered so far, then surf the future of our industry, as we move beyond access to the world of relationships, where “identity management” will help us not only secure but also know our users better—and meet their needs with context-driven services.
We began by looking at how the wave of cloud services adoption is leading to a push for federation—using SAML or OpenID Connect as the technology for delivering cloud SSO. But as I stressed in this post, for most medium-to-large enterprises, deploying SAML will require more than just federating access. By federating and delegating authentication from the cloud provider to the enterprise, your organization must act as an identity provider (IdP)—and that's a formidable challenge for many companies dealing with a diverse array of distributed identity stores, from AD and legacy LDAP to SQL and web services.
It's becoming clear that you must federate your identity layer, as well. Handling all these cloud service authentication requests in a heterogeneous and distributed environment means you'll have to invest some effort into aggregating identities and rationalizing your identity infrastructure. Now you could always create some point solution for a narrow set of sources, building what our old friend Mark Diodati called an "identity bridge." But how many of these ad hoc bridges can you build without a systematic approach to federating your identity? Do you really want to add yet another brittle layer to an already fragmented identity infrastructure, simply for the sake of expediency? Or do you want to seriously rationalize your infrastructure instead, making it more fluid and less fragile? If so, think hub instead of bridge.
Beyond the Identity Bridge: A Federated Identity Hub for SSO and Authorization
This identity hub gives you a federated identity system where identity is normalized—and your existing infrastructure is respected. Such a system offers the efficiency of a "logical center" without the drawbacks of inflexible modeling and centralization that we saw with, say, the metadirectory. In my last post, we looked at how the normalization process requires some form of identity correlation that can link global IDs to local IDs, tying everything together without having to modify existing identifiers in each source. Such a hub is key for SSO, authorization, and attribute provisioning. But that's not all the hub gives you—it's also a way to get and stay ahead of the curve, evolving your identity to meet new challenges and opportunities.
The Future’s Built In: The Hub as Application Integration Point and Much More
Another huge advantage of federating your identity? Now that you can tie back the global ID to all those local representations, the hub can act as a key integration point for all your applications. Knowing who’s who across different applications allows you to bring together all the specific aspects of a person that have been collected by those applications. So while it begins as a tool for authentication, the hub can also aggregate attributes about a given person or entity from across applications. So yes, the first win beyond authentication is also in the security space: those rich attributes are key for fine-grained authorization. But security is not our only goal. I would contend that this federated identity system is also your master identity table—yes, read CDI and MDM—which is essential for application integration. And if you follow this track to its logical conclusion, you will move toward the promised land of context-aware applications and semantic representations. I’ve covered this topic extensively, so rather than repeat myself, I will point you to this series of posts I did last spring—think of it as Michel’s Little Red Book on Context…
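To give a rough feel for that aggregation (a sketch of the idea only, not Radiant's implementation), the hub could assemble a global profile by walking the kind of correlation table sketched in my last post:

def global_profile(global_id, silos, table):
    """Aggregate a person's attributes from every silo into one virtual entry.

    silos maps an application name to a lookup function returning that
    application's attributes for a local identifier; table maps each
    global identifier to its per-application local identifiers.
    """
    profile = {"id": global_id}
    for app, local in table[global_id].items():
        # Each silo stays authoritative for the attributes it masters;
        # the hub only assembles the federated view.
        profile[app] = silos[app](local)
    return profile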
So the way we see it here at Radiant, the emergence of the hub puts you on the path toward better data management and down the road to the shining Eldorado of semantic integration, where your structured and unstructured data comes together to serve you better. But you don’t have to wait for that great day to realize a return—your investment starts to pay off right away as you secure your devices and cloud services.
Immediate ROI That Ripples Across Your Infrastructure
Final Notes: Storage that Scales and the Pillars of Identity Relationship Management
Of course, to make all this happen, you’ll need a big data-driven storage solution that scales to support all those myriad queries and demands. And that’s why we’re so excited about our upcoming HDAP release.
But with freedom comes a lot of responsibility. If you can correlate information based on identity, what does that mean for privacy and, ultimately, for freedom? Even though we know that technology is neutral, the way it's used can be anything but, which is why we are joining Kantara in their IRM Pillars Initiative, to be sure that we are doing the right things and following best practices and standards when it comes to identity, security, and the Internet of Things.
Thanks, once again, for reading through this series—I’m so glad to have a forum where I can take an in-depth look at such topics, along with great readers who come along for the ride, giving me lots of essential feedback and plenty to think about. Please let me know if you have any questions or would like to discuss the future of identity. I love a good-spirited debate!
The post Context is Coming: The Move from IdM to Identity Relationship Management and the Internet of Things appeared first on Radiant Logic, Inc
Access Risk Management Blog | Courion
The plight of the marketer is to distill the essence of a company’s mission in a way that it can be easily understood by virtually anyone. Since joining the IAM technology sector, I have sought a way to describe identity and access management to my mother. My mother is not stupid. She is on the board of three organizations. She hangs out with high-powered people such as politicians, journalists and rock stars like Joan Jett. She possesses the wisdom of a woman approaching 80 years old. But she does not know what IAM stands for, and recently, when I encountered a ‘domestic issue’, I realized that it provides the perfect metaphor to help my mother better understand IAM and access risk.
We at Courion examine data relevant to identities, rights and entitlements, policies, resources and activities. I will map these as they exist in my household:
With our daughter now in college, there are four ‘users’ in our household. It’s me, my wife, our cat and a newly acknowledged mouse. By newly acknowledged, I mean that while we knew we had a co-habitant, we were willing to coexist peacefully with the ‘orphan account’ until recent activities, as described below, heightened our awareness.
Our role definitions are:
- I am the primary bread winner and alpha male. While I do not drink beer, I do enjoy watching football and other sports. I have access rights and entitlements to most everything, but not all things.
- My wife is the property manager and executive management. She has privileged access to all resources and must approve some things for me. For example, I must "ask permission from management" before I blow off yard work to go play golf. That is our segregation of duties, to ensure that no one takes advantage of the system.
- The cat thinks he is the alpha male, but is not. He thinks he has privileged access but he does not. In my view, he has rights in excess of his role and he’s always looking for more and is very vocal about it – especially at mealtime.
- The mouse is the rogue entity. Nobody provisioned access for him, although he probably considers us the intruders. We suspect he hacked his way in with an advanced persistent attack.
We have policies that govern our actions. I do the yard work, keep the cars maintained, and loaf around on Sundays. My wife keeps the home, manages finances, and provides executive oversight. The cat is an indoor cat, so he willingly enforces the “don’t go outside” policy himself and is forbidden to go in certain areas of the house. The mouse, unaware of any policies, seems to have the run of everything – the worst orphan account, excessive rights and privileged access case I’ve ever seen.
My wife has system administrator access to all resources at all times. There are resources she chooses to avoid, like power tools and other gasoline powered items. I have system administrator access to some things, but not all. For example, my wife writes the checks. If I want to write a check I have to ask her for one (more SoD).
The cat thinks he can access all resources, but hey, he’s a cat so perception is reality. He roams freely and has multiple spots to crash. But he doesn’t spend money or use power tools, so access risks are low.
The mouse, on the other hand, is another story altogether. Unfortunately, we thought the orphan account was harmless, but a recent, closer examination of the mouse's 'activity' illuminated our organization's resource access problems.
My car is in the shop right now. Apparently, the mouse, given his unchecked elevated access privileges, built a nest under the hood of my car. What’s more, he’d taken cat food from the house and carried it to the car to build the rodent equivalent of a two-story condo with a gourmet kitchen in my engine compartment – the heat of the engine is his microwave. To add insult to injury, he dined on the ignition wiring harness – apparently quite the tasty dish. Who said he had access to that level of cuisine? And, where’s the cat? It’s his responsibility to watch that access and revoke privileges.
It was the twice-yearly audit (routine car maintenance) and the large fines and penalties (auto repair bill) that highlighted these compliance violations. The car functioned fine last spring; I had no idea what had transpired since my last audit, and everything seemed normal from the driver's seat. How did all of this unwind without my knowing about it?
I clearly needed better role definition and access privileges. But what I really needed was continuous monitoring, so I could have stopped the construction of the mouse 'pad' before the damage was done and taken remedial action when I spotted it. My inaction is now costing me fines and penalties, the cat's brand is tarnished beyond repair, and his competence as a mouser is in question. In any case, I have to take serious action now: I am going to de-provision all access for the mouse (or mice) as quickly as I can.
Ok, by using this metaphor I don’t mean to make light of people’s misfortunes other than my own. Identity and access management and risk mitigation are serious business and can hurt organizations badly. Someone challenged my creativity to see if I could relate this story to IAM issues. What can this tell us?
Periodic reviews to check for compliance are required. But do they reduce risks? Maybe. In this case, they can reduce the normal car-care risks associated with oil changes, fuel injector replacements, routine maintenance and the like. They did not, however, reduce the unforeseen risks associated with unauthorized access and excessive rights. Continuous monitoring and frequent access checks would have mitigated the risk and kept things more in line.
How frequently does your organization conduct access certification reviews? Are you examining all access related activity to assess risk? Have you provisioned proper access from the start and do you have automated means to revoke access and privileges quickly?
November 30, 2013
In a recent blog post, Does KBA and Public Sector Online Services Have a Future?, I raised as an issue the inadequacy of KBA for remote identity proofing given the public, and potentially compromised, data sets that are currently used for this purpose. I believe that it is critical for citizen facing public sector services to incorporate continuous identity vetting/verification/proofing as a compensating control. But can that be effectively done when the service is utilizing federated credentials?
The current notion of having layered security controls is often focused on the network, host, and application layers (which are absolutely critical) and less so on having layered controls within the authentication process itself. For citizen and business facing public sector services, I believe that the strong processes outlined in NIST Electronic Authentication Guideline SP-800-63-2 (PDF) should only be one layer in a comprehensive authentication strategy.
But when adding compensating controls to a federation environment, the following questions come to mind:
- What guidance can serve as a starting point?
- What technical controls are recommended?
- Which entity in a federation is best suited to (or capable of) implementing specific controls?
I have found the Federal Financial Institutions Examination Council (FFIEC) authentication guidance (PDF) to be a good resource on this topic. It identifies the following technical controls:
- Out-of-band identity verification (via a separate channel) to pass through gates related to account maintenance activities (e.g. password reset) performed by customers either online or through customer service channels
- Device Fingerprinting (including device configuration, IP address, geolocation) with the initial binding of the fingerprint to a user done by leveraging an out-of-band identity verification mechanism
- Internet protocol (IP) reputation-based tools to block connection to servers from IP addresses known or suspected to be associated with fraudulent activities
- "Out of Wallet" questions that do not rely on public information (i.e. the entity has a close relationship with the person and can leverage internal data for this purpose) for authorizing higher risk transactions
- Anomaly detection that looks at the velocity of transactions as well as customer history and behavior (a minimal sketch of such a check follows below)
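To make the last of these concrete, here is a minimal sketch of a velocity check in Python; the window and threshold are illustrative assumptions on my part, not values from the FFIEC guidance:

import time
from collections import deque

WINDOW_SECONDS = 300   # look back five minutes (illustrative)
MAX_IN_WINDOW = 10     # more transactions than this is suspicious (illustrative)

_history = deque()     # timestamps of one customer's transactions

def is_anomalous(now=None):
    """Flag a burst of transactions that exceeds the customer's normal velocity."""
    now = time.time() if now is None else now
    _history.append(now)
    while _history and _history[0] < now - WINDOW_SECONDS:
        _history.popleft()  # discard transactions outside the window
    return len(_history) > MAX_IN_WINDOW

A production system would of course baseline each customer's history and behavior rather than rely on fixed thresholds.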
Alongside the controls listed above, I've also taken an initial cut at mapping them to the entities able to implement them (based on policy) in a federation environment.
The answer to the question I've asked in the title of this blog post is "YES." It does require clear thinking on roles, responsibilities and capabilities, but in order to effectively deliver public sector online services, we need to move away from the waterfall approach to identity proofing that is in place today, toward one that is more agile and responsive to constantly morphing threats.
These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer
November 29, 2013
Hosting an IDP is hard, so it's natural that organizations will look to the cloud to satisfy the requirement. Based on where the private key is stored, we can break down the solutions into three broad categories:
1: Dedicated Server with HSM
For these customers, the integrity of the signing is extremely critical. Therefore, they want to maintain a dedicated server on their network and attach an HSM (http://en.wikipedia.org/wiki/Hardware_security_module). The HSM helps ensure that the private key cannot be exported. An HSM is normally used for important root keys, like Verisign's, or for federations like InCommon. The Gluu Server can be used in conjunction with an HSM to satisfy this requirement.
2: Dedicated Server
For these customers, the private key is stored on the file system of a dedicated server to which the customer has root access. The opportunity for the key to be compromised is greater, but the company controls the server firewall, can run intrusion detection and threat analysis software, and, in the case of a breach, can access system logs to perform a thorough forensic analysis.
3: Shared Server
With a shared server, the IDP for many customers is hosted on one physical server. Therefore, the hosting provider is responsible for managing the private keys on behalf of its customers. In the event of a breach, the customer cannot have root access on the IDP, because this might give them access to the data of other multi-tenant customers, or to internal systems of the hosting provider. There are several shared server platforms: Okta, OneLogin, Salesforce, PingOne, Bitium, Stormpath (just to name a few). Gluu decided not to enter this crowded market. If the customer has a small budget, then this solution may make sense. It costs around $150/month to dedicate a server to be your domain IDP, so if you only have 10 employees, you'd probably rather pay $5/month per user on a multi-tenant system. Also, it's implicit here that such a small organization would not care as much about preserving the integrity of the key, or about performing a detailed forensic analysis in the event of a breach.
November 28, 2013
Gosh, is it that time again?
William S. Burroughs - A Thanksgiving Prayer
John Dillinger died for somebody's sins but not mine.
[from: Google+ Posts]
I heard of the Movember movement last year when my friend Pat, aka @Metadaddy, filled my Twitter stream with his moustachy face. I quickly noticed other people growing a moustache, including the Grenoble hockey team: Les Bruleurs de Loups.
So this year, when Andrew Forrest suggested that we create a ForgeRock team for Movember, I didn’t hesitate much, joined, and recruited other coworkers for the French team. I told my wife beforehand and she was not really enthusiastic about the idea of a moustache on my face. But my middle daughter was encouraging me to participate and help improve men’s health research and awareness.
We're reaching the end of Movember, and the moustache has grown. My wife hates it… So help me prove to her that it was worth the suffering, and make a donation to our team.
Filed under: General
November 26, 2013
As we move to close out 2013, we're already excited by all of the opportunities ahead. Planning is underway on some exciting initiatives and great events for 2014! A taste of the details is below.
The following events are confirmed for 2014:
- NSTIC Pilots in Motion – Jan 30: This is a special event produced by Kantara and hosted at the Department of Commerce in Washington DC. The event is an industry day that will feature NSTIC pilots where Kantara is playing a role. More pilots may be added to the day as time and space allow. This event is complementary to similar pilot days being held at IDESG in the weeks prior; ours is positioned to bring a different audience for complementary review and industry input. Event space is limited and we'll have more details on how you can attend soon. If you're interested in joining us, please send an inquiry to firstname.lastname@example.org.
- HIMSS – Feb 23-27: We're planning an amazing workshop for HIMSS 2014. This is the second year in a row that Kantara will be hosting a workshop at HIMSS. This conference goes big, and we love the theme: Innovation, Impact, Outcomes, Onward! We'll have more details about our workshop posted soon, and we hope you'll join us for this open industry event.
- RSA – Feb 24-28: RSA is always an amazing event. We have Kantara Members appearing on conference agenda topics, and we're very happy to bring the next installment of "Non-Profits on the Loose", a great opportunity for our non-profit friends and family to gather. Last year's event was a packed house (OK, a packed art gallery). We all have a great opportunity at this event to break bread with other non-profits, industry stakeholders, and government representatives from around the world. Check out last year's event.
- EIC – May 12-16: The European Identity and Cloud Conference is an event we look forward to every year. We'll be hosting another workshop at EIC in 2014. Our workshops "are asked for by name" at EIC. We'll be presenting around IRM, UMA, profiling of OpenID Connect & OAuth, and much more. Anyone in the Kantara community attending should register with our discount codes: the Kantara community code (10% discount) is "kantara"; Kantara Members can contact email@example.com for the 20% off code.
There’s much more to come from Kantara and we encourage you to join us at this exciting time in Identity, Trust and Privacy innovation.
Gluu has both a social and a business mission. These missions need not be at odds; in fact, they are symbiotic. The business vision of Gluu is quite simple: offer a utility service to help organizations control access to valuable online resources. Our social mission is to make the Internet a safer place for people and businesses by writing great open source software.
When I started Gluu in 2009, I felt access management tools were too expensive for many organizations. There are millions of domains on the Internet, yet access management software like Siteminder had a small impact because only the Fortune 500 could afford it. Open source software was one piece of the solution. The other piece was to provide a cost-effective mechanism for organizations to get support for the open source software, so they could build and operate a mission-critical IT service.
Utilities provide the economies of scale to drive down the cost of technology, making it available to a wider audience. At the dawn of the electric era, only the largest companies could afford electricity; they built power plants on rivers. It wasn't until electric utilities drove down the price of electricity that small businesses and ordinary people could use it. Big businesses also benefited–they no longer had to build and maintain their own power plants.
Gluu’s utility access management service funds our social mission. To make the Internet a safer place, we need to make security software more available to developers and system administrators. The infrastructure we are building today will provide a coral reef on which a diverse ecosystem of new Internet services can thrive.
Networks have an ebb and flow of centralization and decentralization. At first, a technology is introduced by an innovative company; for-profit companies inevitably are quicker to invent products to address market needs. Over time, standards emerge, and the networks decentralize. We saw it with CompuServe and email, AOL and the Web. With better identity standards, Google and Facebook may get their comeuppance. In the '90s, it would have been crazy to imagine every organization launching their own AOL. And yet, that is exactly what the Web made possible. The result was a network richer and more diverse than any media executive at AOL ever imagined. Furthermore, the Web was hacked to achieve purposes never envisioned by AOL executives.
Internet identity is at a similar stage as the Web in 1995. Right now we use services like Dropbox, Google and other centralized services to share files and data. These services rely on us having accounts in the central node. I can only share a doc on Google with you if you have a Google account. Or worse… services rely on security by obscurity (which is not really security at all). With standards and open source software, each domain could build their own data federation services like Google, or perhaps new services that no Google engineer has even imagined.
This transformative vision of a decentralized and safer Internet motivates our team at Gluu. And hopefully we'll also make some money. If we can do a little of both, or a lot of both, we'll be satisfied that the struggle was worth it.
With OpenDJ 2.6.0, we've introduced a new way to access your directory data, using HTTP, REST and JSON. The REST to LDAP service, available either embedded in the OpenDJ server or as a standalone web application, is designed to facilitate the work of application developers. And to demonstrate the value and the ease of use of that service, we've built a sample application for Android: the OpenDJ Contact Manager.
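As a quick taste of the interface, here is a sketch of a query in Python. The host, port, resource name, filter and credentials all depend on how the REST to LDAP service is configured, so treat them as placeholders:

import requests

# Search for entries whose display name contains "picard" through the
# Common REST query interface of a REST to LDAP deployment.
resp = requests.get(
    "http://opendj.example.com:8080/users",
    params={"_queryFilter": 'displayName co "picard"', "_prettyPrint": "true"},
    auth=("bjensen", "hifalutin"),  # placeholder credentials
)
resp.raise_for_status()
for user in resp.json().get("result", []):
    print(user["_id"], user.get("displayName"))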
The OpenDJ Contact Manager is an open source Android application built by Violette, one of the ForgeRock engineers working on the OpenDJ team. You can get the source code from the SVN repository: https://svn.forgerock.org/commons/mobile/contact-manager/trunk. Mark wrote some quite complete documentation for the project, with details on how to get and build the application, published at http://commons.forgerock.org/mobile/contact-manager/.
The whole application is just about 4,000 lines of code, and most of it deals with the display itself. But you can also find code that makes asynchronous calls to the OpenDJ REST interface, pages through results, and parses the resulting JSON stream to populate the contacts, including photos. Et voila:
The application is just a sample, but it is clearly usable in its current form. Once a contact has been retrieved from the OpenDJ directory, it lets you add the contact to the standard Contacts application, call the person, locate their address on a map, send them an email, or navigate through the management chain…
In future versions, we are planning to add support for OAuth 2.0, removing the need to store credentials in the application settings.
As it's open source, feel free to play with it, hack on it, and contribute back your changes.
Filed under: Directory Services
In Martin Kuppinger
It has been somewhat quiet around IBM's IAM offering for the past few years. Having been one of the first large vendors to enter that market, IBM was overtaken by other vendors that were more innovative and set the pace in this still-emerging market.
This seems to be over now, and IBM is showing up amongst the IAM leaders again. Since IBM launched its IBM Security division as part of its software business and moved the IAM products from the Tivoli division into that new division, things have changed. The IBM Security division is responsible not only for the IAM products but also for a number of other offerings, such as the QRadar products.
IBM has defined an IAM strategy that brings together its capabilities in Security Intelligence – such as the IBM X-Force services and the QRadar products – with IAM. The core of IAM is still formed by familiar products (if you replace "Tivoli" with "Security"), such as the IBM Security Access Manager, the IBM Security Directory Integrator, the IBM Security Identity Manager, and others. However, IBM has put a lot of work into these products to improve them and to make them leading-edge (again, in some cases).
There have been four recent announcements. One is the IBM Security Access Manager for Mobile, an appliance that allows managing mobile access and provides SSO services and risk- and context-aware access based on information such as IP reputation – that is where, for instance, IBM X-Force comes into play.
IBM has also introduced their own Privilege Management solution, IBM Security Privileged Identity Manager, to manage shared accounts and add strong authentication. The interesting piece there is the tight integration with QRadar to analyze real-time activity of privileged identity use.
The third major announcement is what IBM calls the IBM Security Directory Server and Integrator. Here they bring together Directory Services and Identity Federation – plus QRadar integration. Integrating federation and directory services allows managing more identities, such as external users, as well as reaching out to Cloud services.
Finally, IBM has extended their IBM Security Identity Manager – the former Tivoli Identity Manager – and added advanced analytical capabilities as well as integration with QRadar security intelligence. The latter allows for better analysis of real-time attacks and fraud detection. While such integration is not entirely new, if you look for instance at NetIQ Identity Manager and Sentinel integration, it highlights the fact that IBM is moving forward with its IAM offerings rather quickly now, showing innovation in various areas and having a clear execution strategy.
I always appreciate strong competitors in a market – it helps drive innovation, which is good for the customers. The IBM investment in IAM is also a good indicator of the relevance of the market segment itself – IAM is one of the key elements for Information Security. IBM’s strategy also aligns well with my view, that IAM is just one part of what you need for Information Security. Integration beyond the core IAM capabilities is needed. So, in light of IBM’s current news around IAM, I think it is worth having a closer look at them again.
November 25, 2013
IdentityNext is a unique conference that pulls aspects from several of the identity events I've attended over the years. As only a handful of Americans attend, it reminded me of KuppingerCole's EIC (European Identity Conference). There were delegates from many Western European countries, for example Sweden, Denmark, France, Germany, Austria, Spain, Belgium, the Netherlands (of course), England and probably a few more. The focus on privacy reminded me of PII (Privacy, Identity, Innovation), which is held several times around the US. And finally, it was the second conference I attended this year that had an "un-conference" portion, inspired by IIW (Internet Identity Workshop).
It was a great honor for me to deliver the opening keynote. I wanted to give a general interest talk about federations, an introduction to OAuth2, and describe how these two technologies could be combined to the net benefit of society. I was a little tense, especially as I’d never attended this conference. My slides are here. I was amused that Martin Wegdam quoted me on Twitter as apologizing for previous XML identity standards. I was not really serious… As Andre Durand says, “Identity” is a big and complex domain of knowledge. If we (as in the global community of identity architects) had figured “it” out on the first try, it would have been a miracle. Defining standards for identity has been an iterative process. And 13 years later, I think the work done on OpenID Connect puts us on the verge of a good technical standard for one aspect of Identity–authentication. “Connect” has achieved something even more elusive: consensus.
One of the best talks was given by author, journalist and teacher Pernilla Tranberg. She presented an up-to-date view of the current state of online privacy, and some pragmatic strategies we can use to gain more control of our personal data. For example, don't use Google search… use "Start Page", which strips out the tracking cookies that sell advertisers the interests implied by your Internet searches. Also, advise your kids to sign up for Facebook under a different name so they can start their adult life with a clean slate.
One of the most amusing talks was given by Mike Chung from KPMG, on the topic of predictions. He recommended a number of books: Nate Silver's The Signal and the Noise; two books by Nassim Nicholas Taleb, The Black Swan and Fooled by Randomness; Dan Ariely's Predictably Irrational; Robert Kaplan's Revenge of Geography; Daron Acemoglu's Why Nations Fail; Robert McNamara's In Retrospect; and Jim Paul's What I Learned Losing a Million Dollars. Apparently none of them helped him very much, given his self-proclaimed abysmal record of making accurate forecasts in identity and access management. For example, he forecast in the mid-2000s that WS-* would be the predominant federation protocol, among other equally inaccurate claims. He totally missed the rise of mobile computing. And even more amazingly, companies paid him for his inaccurate advice. Hearing stuff like this makes me nervous about the big bets Gluu has placed on OAuth2, and reminds me that if Gluu is able to invest our scarce resources properly in one of the most dynamic technical markets, we're probably more lucky than smart.
Most Americans are unaware of the identity card programs that have been undertaken by almost all European governments. The conference featured talks on the efforts of Sweden, Germany, and Belgium. All of these cards can be used to access government services, but many are expanding to B2B and B2C purposes. For example, in Belgium there are beer vending machines that read the birthday off your national ID card to figure out if you're old enough to be served. In Japan I videotaped a machine that automatically poured a glass of beer. It's clear… our country is just so far behind, it's ridiculous.
Given my keen interest in federation, the talk I got the most out of was Rainer Horbe's talk on federation. Austrians clearly understand the value of federations, and also that these federations are hard to form. So the Austrian Chamber of Commerce formed the Wirtschaftsportalverbund (which, believe it or not, is an abbreviation for something like the Austrian Identity Federation Authority), which aims to establish B2B and B2C federations that reduce the cost of identity management and SSO. This group is creating a framework to help businesses jumpstart federations, including the required technical and governance components.
One of the most interesting conversations I had at the conference was with Haydar Cimen from KPN and Steve Pannifer from Hyperion Consulting, regarding Snowden. While a majority of Americans now regard him as a heroic whistleblower, his support in Europe is even higher. In fact, I seem to be the only one in my industry who thinks he needs to answer for his actions. My problem is that if more people follow his precedent, our government and businesses couldn't operate. If he thinks the moral imperative to uncover this wrong was sufficient to justify his actions, he shouldn't be hiding in Russia. If he had stayed in the US, I'd support him for standing up for his beliefs. Many people don't think he would have gotten a fair trial if he had stayed, or that maybe the government would have water-boarded him, or left him in solitary for years like they did to Manning. Whatever you think of Snowden, it's clear that our allies view the US as little better than China, are hesitant to travel to the US for fear of becoming the victim of a big-data analysis snafu, and are resentful that their systems are being hacked in the pursuit of America's enemies in a covert cyber war for which we apparently have a great talent (and an insane amount of budget).
I was happy to see many old friends, especially from Surfnet and Kennisnet. I also got a chance to chat with Hans Zandbelt from Ping Identity. Apparently, after working all day on helping companies implement federation, he can't get enough, so he has been moonlighting on his own OpenID Connect plugin for Apache. It's much simpler than the one Gluu has undertaken in our crowd-sourcing project. The nice thing about it is that it is standalone. Gluu uses a local process, "oxd", to handle the OAuth2 messaging; some people don't want this additional complexity. We used that approach because it enabled us to leverage our Java libraries for OpenID Connect and UMA, and it would have taken us too long to do all the messaging in C (as we already have Java libraries written). Hans' plugin supports fewer features, but it's a great example of how you can use a subset of the features if it suits your purpose. More options for developers is great, so I hope Hans has the energy to keep working on it, and to make it available to other developers. If you want to look at the code, it's currently here.
Finally, one of the best uses of technology on display was in a video from the UK featuring a hipster, the "Urban Wizard." To express his identity, he likes to dress up like a wizard when he walks around London. He melted his Oyster card (subway debit card) and attached the chip to his staff. As he walks into the subway, he touches his staff to the turnstile, and magically, the gates swing open. Apparently the police were not amused and won't let him do this anymore. But it's a reminder that technology is not a one-size-fits-all affair; people will use things in ways the developers never intended. Who knows what OX will be used for one day… open source and open standards are more embracing of this phenomenon than the metro police.
Managing and governing access to systems and information, both on-premise and in the cloud, needs to be well architected to embrace and extend existing building blocks and help organizations move toward a more flexible, future-proof IT infrastructure. Join KuppingerCole APAC in this Breakfast Debate to find out how best to move from old-school, prohibition-based security to trust-based access control.
Access Risk Management Blog | Courion
We all know that deploying an enterprise IAM solution is a journey that entails making many different, and often critical, strategic decisions. This journey is often described as long, stressful and tedious, but should it be?
Choosing the right vendor can alleviate many of the hurdles that organizations face in an IAM deployment project and can result in a faster, more successful implementation. This blog post is intended to help customers in their evaluation of IAM vendors by addressing some important aspects that need to be considered and by raising some questions that need to be answered:
Purpose: Clearly defining the purpose of the project helps consolidate and clarify expectations for the IAM project. This is often not given as much attention as needed; as a result, expectations are not clearly stated, and organizations struggle to make the right choice in selecting a vendor. Ask yourself this question: what is the primary goal of this project? Some examples are:
- Is it the help desk call volume that you are trying to reduce?
- Is it the end user experience that you are trying to improve?
- Is it the auditors you are trying to answer?
- Is it the overall risk posture that you are trying to strengthen?
Sometimes, it could be one or more of these. If that is the case, then prioritize the goals. Understanding what it is that you are trying to accomplish and clearly stating the goals and priorities will go a long way in your evaluation process.
Impact: Organizations can be shortsighted when it comes to understanding the impact a project such as this may have across the organization. The impact goes beyond those obviously affected, such as the end users who will use the solution and the administrators who will manage it:
- Is the solution easy to use and will end-users be able to use the solution readily?
- Is the solution easy to administer?
- Does the solution need programming skills sets to deploy and maintain?
- How many people are typically involved in administering the solution?
- Does it only help reduce the workload of people performing provisioning/de-provisioning actions?
- Involvement from target system owners, help desk administrators, HR and marketing:
- How does the solution integrate with the target systems?
- How much of the target system users’ time is needed to support the deployment?
- Does the solution help reduce help desk call volume?
- How is HR information leveraged and how much of the HR department’s time is needed to support the integration?
- Is marketing needed to promote the adoption of the solution? If so what tactics are planned to expedite adoption?
Target systems: Many organizations do a good job determining which target systems need to be part of the IAM project. For those that struggle with this decision, a good place to start would be to consider:
- High-volume applications, through which most of the requests come.
- Target systems on which the provisioning teams spend the most time.
- High-risk applications, over which you lose sleep because you fear that your organization may be at risk if the system were to be compromised.
- The applications that need to be part of the overall solution. Is the solution capable of integrating with these systems easily during any stage of the deployment process?
Processes: More often than not, organizations tend to think that their situation is entirely unique and that a custom solution is absolutely essential to address every detail of every process they currently have. While it is true that no two organizations have exactly the same set of requirements, the need to address every single detail quite often proves detrimental to the pace of the project. The result is a project that drags on for a long time; by the time it gets "on its feet," the requirements may have changed again.
The point here is to go back to the drawing board and map out the goals and requirements in priority order. By doing so, organizations may realize that with the right IAM solution in place, there may be simpler and more efficient approaches to addressing problems that they had previously been tackling through cumbersome processes or work-arounds due to a lack of tools or available information. Organizations should consider the IAM solution as not just a solution that automates processes but also as a solution that provides the ability to improve existing processes where possible.
Technology: Based on everything discussed so far, you may already be thinking about the technology that can support all of this. After all, everything that needs to be accomplished in an IAM project is driven by the underlying technology. Therefore, choosing a solution that can address both immediate and future needs is strategically important. Some of the questions that might help in determining the right technology are:
- Can the product satisfy all the requirements determined thus far, such as:
- Ease of use
- Ease of administration
- Ease of target integration
- Addresses various processes – both immediate and future needs
- Robustness—is the solution enterprise grade?
- What are the components of the overall IAM offering?
- Are each of those components built on the same platform, or are they cobbled together through acquisitions?
- What is the future road map for the product?
Ability to implement: Organizations dedicate a considerable amount of attention to choosing the "right technology," but frequently undervalue the vendor's ability to implement the solution. This often leads to an unbearably long IAM project, or sometimes even a failed one. Key reasons for failed IAM projects include an inability to understand the magnitude of an enterprise IAM project, underestimating the complexities involved, not aligning expectations correctly, failing to scope a stage-by-stage approach to achieve the ultimate goal, and an inability to map out a proper path to success. The importance of choosing a vendor with the experience to clearly address each of these factors, and consequently drive an enterprise-wide IAM project to success, cannot be overstated!
Conclusion: The factors addressed here and the questions raised are not meant to be an exhaustive list of everything that needs to be evaluated when choosing an IAM vendor, but are intended to highlight some of the key aspects you need to consider to make an informed decision, pick the right vendor, and better ensure your success.
November 24, 2013
I've written before about Multi-Sided Platforms and how they provide a model for looking at identity federation. Given that public sector organizations across the world are starting to deploy such platforms (brokers, exchanges, hubs, etc.), this blog post looks at some potential capabilities that could be enabled by them.
I won't belabor the benefits of minimizing integration pains, enabling protocol mediation and providing privacy-respecting capabilities. They are all critical, but well understood, aspects of why the public sector is going with the Platform-in-the-Middle approach to leveraging non-public-sector identity services.
In this post, I would like to focus on two things that have great impact on the adoption of identity federation in the public sector: culture and contracts.
Public sector agencies vary in their understanding of identity, and their mission and the nature of their relationship with the citizen often drive that understanding. You will often find a deep and nuanced view of identity, risk and fraud in agencies that maintain a citizen's vital records. As I have noted before, these are the agencies that often have a role in identity establishment.
These agencies believe (and rightfully so) that their internal capabilities for identity proofing are more trustworthy than anything found in the private sector. I personally don't believe in force-feeding these agencies CSPs or Identity Managers. But they may very well find Token Managers, offered via a Platform-in-the-Middle to be an attractive option. It allows them to control the identity proofing and the secure binding to the token. The reverse case is an Agency that has a mature Token Management infrastructure and wants to leverage external Identity Managers.
On the contracting side, using the platform as the point of demand aggregation to drive pricing negotiations for services improves choice while enabling a flexible pricing model based on needed capabilities.
In short, the key to using the platform to drive identity federation adoption is to have options available, buffet-style, tailored to agency needs and culture, not a one-size-fits-all solution.
These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer
[from: Google+ Posts]
November 23, 2013
Fine-Grained Permissions with Stormpath
Many customers have asked us how to do fine-grained permissions for users managed in Stormpath. Good news! With customData, you now have tons of flexibility in modeling user permissions.
Stormpath was intentionally designed with a fairly generic model for role-based access control. Because our customers use the API for everything from simple website login to multi-tenant SaaS applications, deep application-specific permissioning requires some custom data modeling. That's where the customData resource comes into play.
The customData resource allows you to store up to 10MB of JSON-formatted information about a user or a group (docs here). For instance, customData can store profile info for Jean-Luc Picard:
curl -X POST --user $YOUR_API_KEY_ID:$YOUR_API_KEY_SECRET \
     -H "Accept: application/json" \
     -H "Content-Type: application/json" \
     -d '{
           "username" : "jlpicard",
           "email" : "firstname.lastname@example.org",
           "givenName" : "Jean-Luc",
           "surname" : "Picard",
           "customData" : {
             "birthPlace" : "La Barre, France",
             "favoriteDrink" : "Earl Grey tea"
           }
         }' \
     "https://api.stormpath.com/v1/applications/$YOUR_APP_ID/accounts"
# (The exact endpoint depends on your setup; the application accounts URL above is illustrative.)
Permissions Using the customData Resource
Another way to use the customData object is to assign special permissions at the user or group level. You could give Jean-Luc fine-grained permissions by declaring a set of permissions attributes in your application and then assigning them to the fine captain:
"crew_quarters" : "9-3601",
"lock_override" : true,
"command_bridge" : {
  "type" : "vessel:bridge",
  "identifier" : "NCC-1701-D",
  "action" : "lockout",
  "control_key" : "..."
}
In this example, Picard has two permission types that would apply to all crewmembers of the Enterprise: crew_quarters and lock_override. These could define what he can do in a certain area of the ship, and what parts of the Enterprise he has access to. Similarly, Stormpath customData can help define what users can do in a part of your application, or restrict access to certain areas.
Picard also has a permission block that is unique to him: command_bridge. This highlights an important feature of the customData object: it is schema-less, and its name-value pairs do not need to have the same structure for all accounts or groups. command_bridge allows Picard to lock out command functions of his own ship, NCC-1701-D, and restrict command to the Bridge area of the ship. Only a Captain would have this permission, presumably to command the ship as it goes down or to lock out control from a hostile boarding party, so no other users in the directory would need this name-value pair.
Permissions with Shared Secrets
The command_bridge permission includes a nested value, control_key, that holds a shared secret authenticating the command_bridge permission. In order to commandeer the Enterprise, he needs to enter that value, in his own voice, from the Bridge.
We can also see how a coding error lost Picard control of the Enterprise – he failed to encrypt his secret token, control_key. Before passing a shared secret to Stormpath, make sure to encrypt it using a strong encryption cipher, such as AES-256-CBC with a secure random Initialization Vector.
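For example, here is one way to do that in Python with the cryptography package; this is a minimal sketch under our own naming, not Stormpath code, and key storage remains your responsibility:

import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_shared_secret(key, secret):
    """Encrypt a shared secret with AES-256-CBC and a fresh random IV before storing it."""
    assert len(key) == 32, "AES-256 requires a 32-byte key"
    iv = os.urandom(16)                    # secure random Initialization Vector
    padder = padding.PKCS7(128).padder()   # CBC operates on full 16-byte blocks
    padded = padder.update(secret) + padder.finalize()
    cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())
    encryptor = cipher.encryptor()
    # The IV is not secret; prepend it so decryption can recover it later.
    return iv + encryptor.update(padded) + encryptor.finalize()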
Otherwise, you might as well just tell the Android your shared secret.
Permissions Vs. Roles
When modeling a complex user infrastructure, such as a ship or a multi-tenant SaaS, one of the biggest challenges is modeling out permissions, groups and roles. So first, let's define what these things are:
- A group is a collection of users. Users may be members of many groups.
- A role is a way to organize rights.
- Permissions are a type of right. Permissions could be grouped into a role and assigned to a group. Or you can assign permissions to users on an ad-hoc basis.
A permission, such as command_bridge, is a statement of raw functionality: actions and behaviors in an application, and nothing more. Permissions explicitly define only "what" the application can do, but not "who" can do those things. They have three important components:
- What type of resource is being interacted with (“type”: “vessel:bridge”)
- Which specific resource is being interacted with (“identifier”: “NCC-1701-D”)
- What action or behavior is being performed (“action”: “lockout”)
If a group of permissions doesn’t address these things specifically, it would likely be better expressed as a role.
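To see how an application might evaluate one of these fine-grained permissions at runtime, here is a minimal sketch in Python. It is application-side logic under our own naming, not a Stormpath API call, and it assumes the account's customData has already been fetched as a dict:

def is_permitted(custom_data, permission, rtype, identifier, action):
    """Check one customData permission entry against a requested action."""
    entry = custom_data.get(permission)
    if not isinstance(entry, dict):
        return False  # no such permission, or not a structured entry
    return (entry.get("type") == rtype and
            entry.get("identifier") == identifier and
            entry.get("action") == action)

# Example: does Picard hold the bridge-lockout permission for his ship?
#   is_permitted(picard_custom_data, "command_bridge",
#                "vessel:bridge", "NCC-1701-D", "lockout")  ->  True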
Setting Permissions at the Group Level
You could also set permissions for crewmembers at the group level, based on organizational groupings like rank and function. Remember, users can be associated with multiple groups in Stormpath, which means that any user assigned to a group inherits that group's permissions.
First, create a series of groups associated with different sets of permissions. Then give each group a customData resource that defines those permissions. Picard, as a member of the "Command Division" and "Senior Officers" groups, would have different permissions than Guinan, who would be a member of totally different groups, such as "Operations Division" and "Ten Forward Staff." Setting coarse-grained permissions at this level reduces complexity and development time.
Stormpath is a User Management API that reduces development time with instant-on, scalable user infrastructure. Stormpath’s intuitive API and expert support make it easy for developers to authenticate, manage, and secure users and roles in any application.
And, because I love all things TNG, here is the most epic password breach… ever.
November 22, 2013
Last Monday and Tuesday (Nov 18-19), I was in Paris attending the 4th International LDAP Conference, an event I help organize with LDAPGTF, a network of French actors in the LDAP and identity space. ForgeRock was also one of the 3 gold sponsors of the conference, along with Symas and Linagora.
The conference happens every other year and is usually organized by volunteers from the community. This year, the French guys were the most motivated, especially Clément Oudot from Linagora, leader of the LDAP Tool Box and lemonLDAP projects, and Emmanuel Lecharny, one of the most active developers on Apache Directory Server.
I was honored to be the keynote and first speaker of the conference. I presented "The Shift to Identity Relationship Management", which was well received and raised a lot of interest from the audience.
The first day focused more on the users of LDAP and directory services technologies, and several presentations were made about REST interfaces to directory services, including the standard in progress: SCIM.
Kiran Ayyagari, from the Apache Directory project, presented his work on SCIM and the eSCIMo project. Present for the first time at LDAPCon, Microsoft’s Philippe Beraud spoke about Windows Azure Active Directory and its Graph API. And I talked about and demoed the REST to LDAP service that we’ve built in OpenDJ. For the demo, I used PostMan, a test client for HTTP and APIs, as well as our newly open sourced sample application for Android: OpenDJ contact manager. In the afternoon, Peter Gietz talked about the work he did around SPML and SCIM leveraging the OpenLDAP access log.
After many talks about REST, we had a series of talks around RBAC. Shawn McKinney presented the Fortress open source IAM project and, more specifically, the new work being done around RBAC. Then Peter, Shawn and Markus Widmer talked about the effort to build a common LDAP schema for RBAC. And Matthew Hardin talked about the OpenLDAP RBAC overlay, which brings policy decisions into the directory when deploying Fortress.
Then followed presentations about local directory proxy services for security based on OpenLDAP, about Red Hat FreeIPA (another first appearance at LDAPCon) and about OpenLDAP configuration management with Apache Directory Studio. Also, Stefan Fabel came all the way from Hawaii (Aloha!) to present a directory-based application for managing and reporting publications by a university: an interesting story about building a directory schema and data model.
The day ended with a presentation from Clément Oudot about OpenLDAP and the password policy overlay. As usual, talking about the LDAP password policy internet-draft raises the question of when it will finally be published as an RFC. While there is a consensus that it’s important to have a standard reference document for it, I’m failing to see how we can dedicate resources to achieve that goal. Let’s see if someone will stand up and take the leadership on that project.
After such a long day of talks and discussion, most of the attendees converged to a nearby pub where we enjoyed beers and food while winding down the day through endless discussions.
The second day of LDAPCon 2013 was more focused on developers and the development of directory services. It was a mix of status updates and presentations of open source directory projects like OpenDJ, OpenLDAP or LSC, some discussions about backend services, performance design considerations and benchmarks, a talk about Spring LDAP… As usual, we had a little bit of a musical introduction to Howard Chu‘s presentation.
I enjoyed the benchmark presentation by Jillian Kozyra, which was lively and rational, and outlined the major differences between open source based products and closed source ones (although all closed source products were anonymized due to license restrictions). It’s worth noting that Jillian is pretty new to the directory space and she seems to have tried to be as fair as possible with her tests, but she did say that the best documented product and the easiest one to install and deploy is OpenDJ. Yeah!!!
Another interesting talk was Christian Hollstein‘s about his “Distributed Virtual Transaction Directory Server“, a telco-grade project he’s working on to serve the needs of 4G network services (such as HSS, HLR…). It’s clear to me that telco operators and network equipment providers are now all converging on LDAP technologies for the network, and this drives a lot of requirements on the products (something I have known since we started the OpenDS project at Sun and kept in mind while developing OpenDJ, even though right now our focus has mainly been on large enterprises and consumer-facing directory services).
All the slides of the conference have been made available online through the LDAPCon.org website and the Lanyrd event page. Audio has also been recorded and will be made available once processed. And as usual, all the photos that I took during the conference are publicly available in my Flickr LDAPCon 2013 Set. Feel free to copy for personal use.
It’s been a great edition of LDAPCon and I’m looking forward to the next one, in two years!
Meanwhile, I’d like to thank the sponsors, all 75 attendees, the 19 speakers and the two organizers I had not mentioned yet: M.C. Jonathan Clarke and Benoit Mortier.
Filed under: Directory Services
In Rob Newby
I’ve worked in Security for many years, specialising in Network and Data Security, largely by chance, following my interests and the market in equal measure. I started with authentication tokens and SSL acceleration devices back in the early 2000s; the latter market mutated into key and certificate management, with encryption of various types hanging off these monolithic management devices. Some of the SSL accelerators turned into load balancers and proxies, even SSL VPNs. It was a technology that spawned a number of others. In 2009, I prophesised that encryption was finally going to make a difference. I knew that data security was important, but I could not have predicted exactly how important that statement would be. I was roundly criticised for my stance at the time, and possibly rightly so considering the timescales, but I now feel a little vindicated, if not prescient.
Of course, now that Cloud is becoming prevalent, encryption is more important than ever to protect customer data in transit and in storage away from its source. Key and Certificate Management is coming to a point where it is both usable and necessary, and the threats are very clear. As the adoption of Cloud technologies increases exponentially, consumers are finding there is a greater requirement for encryption and key management technology. Their data is no longer in their control for processing and storage, international agencies are spying on “everyone” it seems, and breaches are happening on a regular basis. Google’s Eric Schmidt recently advised that the way to end state-sponsored spying was to “encrypt everything”.
I am currently writing a Leadership Compass to compare the vendors in this field, and have just completed an Advisory Note which explains what to look for in an Enterprise Key and Certificate Management (EKCM) solution, not to mention why you should be looking for one in the first place. EKCM can be used for a multitude of encryption and authentication tasks: certificates for email and SSL, and keys for tape, database and laptop encryption, to name but a few. An investment in EKCM now seems a sensible choice, as Cloud is here to stay. As EKCM increases in scope, I can’t help thinking that businesses will find there are limitations in the reach of current technologies, and will look for ways to extend them to their end-user clients at greater scale without those clients losing control of their security environments. Imagine for a moment a global Telecoms company that could manage keys or certificates for all of its users, and authenticate or encrypt for them on demand, to any other user or business in the world. It would take a lot of co-operation, and a lot of infrastructure, but the technology and ability to do this exist today; it’s just a matter of putting it together. Maybe I’m getting ahead of myself again…
The traditional corporate perimeter is starting to disappear as Cloud adoption increases, enabling a yet more dispersed workforce and client-base. We are already discussing new perimeters around information, requiring classification and asset tagging. We are seeing the rise of technologies that focus on tagging data to protect itself, so-called “Smart Data”, and creating virtual environments/perimeters that data cannot move outside. The next issue is how to keep this data protected once it leaves the corporate/controlled environment and spills out into the Internet.
The rise of Big Data continues to create its own security solutions and issues of course. As more and more Big Data solutions are created to process data at scale, the metadata being produced is of more value than the original data store. This data needs to be protected at source. I am beginning to see security solutions which rely on processing of logs on global scales, which will need to be implemented similarly to the key management technologies above. This will create further concerns about where this processed data is being stored and who has visibility, not just the service providers, but national and international intelligence agencies.
This is the direction that business is moving in, however, and as security professionals we have to deal with the issues this creates. We are already seeing Cloud adoption accelerating and boundaries disappearing; huge amounts of data are being created, shared and managed over vast distances. Businesses can no longer rely on their data being hidden away in datacentres, as the edges of those datacentres are now porous, geographically dispersed and constantly shared. Effectively, the Internet is becoming a giant data store; the only way to differentiate the sensitivity of data is by encrypting it or not. On the other side of this, there are very real opportunities for communications and technology companies working at large scale to create a more “private” Internet over existing infrastructure, effectively making the world their datacentre and applying the required protection where it is needed: with the data, not with the walls.
Statements from auditors about the risks posed by privileged users are not really needed to justify paying special attention to privileged access. more
In Martin Kuppinger
During the last few months, we have seen – especially here in Europe – a massive increase in demand for methods to securely share information, beyond the Enterprise. The challenge is not new. I have blogged about this several times, for instance here and here.
While there have been offerings for Information Rights Management or Enterprise Rights Management for many years – from vendors such as Microsoft, Adobe, Documentum or Oracle, plus some smaller players such as Seclore – we are seeing a lot of action on that front these days.
The most important one clearly is the general availability of Microsoft Azure RMS (Rights Management Services), with some new whitepapers available. I have blogged about this offering before, and this clearly is a game changer for the entire market not only of rights management, but the underlying challenge of Secure Information Sharing. Microsoft also has built an ecosystem of partners that provide additional capabilities, including vendors such as Watchful Software or Secude, the latter with a deep SAP integration to protect documents that are exported from SAP. And these are just two in a remarkably long list of partners that help Microsoft in making Azure RMS ready for the heterogeneous IT environments customers have today.
Aside from the Microsoft Azure RMS ecosystem, some other players are pushing solutions into the market that can work rather independently, somewhat more in the way Seclore does. Two vendors to mention here are Nextlabs and Covertix. These are interesting options, especially (but not only) when there is a need for rapid, tactical solutions.
Other vendors that are worth a look in this market for Secure Information Sharing include Brainloop and Grau Data. Both are German vendors, but there are other solutions available in other countries and regions. These focus primarily on providing a space to exchange data, while the others mentioned above focus more on data flowing rather freely, by protecting these documents and their use “in motion” and “in use”.
The current momentum – and the current demand – are clear indicators of a fundamental shift we see in Information Security and Information Stewardship. In fact, all these solutions focus on enabling users to share information in a secure but controlled way. This is in stark contrast to the common approach within IAM (Identity and Access Management) and IAG (Identity and Access Governance), where the focus is on restricting access.
Secure Information Sharing enables sharing, while the common approaches restrict access to information on particular systems. So it is about enabling versus restricting, but also about an information-centric approach (protect information that is shared) versus a system-centric concept (restrict access to information that resides on particular systems).
With the number of solutions available today, from point solutions to a comprehensive platform with broad support for heterogeneous environments – Microsoft Azure RMS – there are sufficient options for organizations to move forward towards Secure Information Sharing and enabling business users to do their job while keeping Governance, Compliance, and Information Risks in mind. Regardless of the business case, there are solutions available now for Secure Information Sharing.
It is time now for organizations to define a strategy for Secure Information Sharing and to move beyond restricting access. More on this at EIC Munich 2014.
November 21, 2013
Apparently, it’s a thing. I made some to go with roast pork. It was nice. We had it again tonight with fried salmon, green veg with cream and saffron, and pasta. It was nice again. So, here’s the vague recipe…
The pickling juice is cider vinegar, water and sugar in a 5:5:1 ratio. For three apples (sliced into big but thin chunks), I used 400 ml each of vinegar and water, and (obviously) 80g of sugar. I added black peppercorns, coriander seed, cloves and star anise. Probably I could’ve used less liquid or more apple.
It was pretty nice with the pork after 2-3 hours. Even nicer with the salmon a day later.
That is all.
I recently witnessed a real-life case of the ‘hot potato’ game between IT Security, the Line of Business and Corporate Compliance at one of my long-time and loyal customers. The scene, almost as if directed by a Hollywood professional,...
Our guest blogger today is Kantara Member Rainer Hörbe. Rainer has been a contributor, architect and standards editor for the Austrian eGovernment federation. In the European cross-border eHealth federation project epSOS he served as security policy adviser. As a Member of Kantara Initiative, OASIS, and ISO SC27 he is engaged in developing models and standards in federated identity management.
Snapshot: SAML IOP Past, Present, and Future
I was invited to speak at the “Borderless eID workshop” on Nov 18th in The Hague to represent Kantara Initiative and discuss a harmonized approach to technical specifications in Europe, for example a uniform, standardized SAML profile. Each presenter was allowed five minutes to discuss the topics below:
- Rainer Hoerbe (Kantara Initiative): [technical interoperability using a harmonized SAML profile]
- Nils Fjelkegård (Swedish government): [interoperable national trust frameworks]
- Frank Leyman (Belgian government): [national eID and public/private sector]
- Mirjam Gerritsen (Dutch government and e-Recognition): [compliance with STORK Assurance Levels]
- Herbert Leitold (Austrian government and STORK): [mandates for persons and organizations]
One can get a quick snapshot of the SAML interoperable products and services landscape here. Now, for my part of the agenda, I told a story that began with a timeline of SAML, progressed to explain benefits and threats, and concluded with an outlook toward the future. The timeline looks as follows:
- SAML was predominantly forged by OASIS and the Liberty Alliance (which later evolved to become Kantara), and received major support from the Shibboleth people.
- OASIS SAML 1.0 offered no interoperability; 1.1 was frequently deployed in enterprises and bilateral federations.
- SAML 2.0 merged the Liberty Alliance fork back in. Around that time the focus moved to scalability in large federations with the WebSSO use case, which required the introduction of standardized configurations (metadata) and conformance profiles.
- The Liberty Alliance conducted Interop events to certify products; Kantara Initiative continues to provide certification services.
- Beginning with the US eAuth (later FICAM) profile, Kantara developed the Kantara eGov SAML 2.0 profile, which is actually a general federation profile.
- During the last couple of years, an ecosystem of products, tools and libraries has grown up around the Kantara eGov SAML 2.0 Interop Profile and related deployment profiles such as Saml2Int. Saml2Int continues its life-cycle within the Kantara Federation Interoperability WG (FIWG).
- Not adhering to technical interoperability can bite back at deployment time (see HealthCare.gov)
- Large SAML federations (mostly from research & education and government) usually rely on a SAML profile derived from the Kantara eGov Interop Profile, scaling up to thousands of IdPs and SPs.
- Future: the plan is to merge the Kantara Interop Profile into the SAML 2.1 specs, and Kantara will continue to provide a testing program for SAML as well as for other established and emerging protocols.
Varying community efforts to harmonize around SAML continue. If you would like to learn more and contribute to ongoing harmonization and certification efforts, I encourage you to join Kantara, eGovWG and FIWG.
Disclaimer: opinions expressed are that of the guest blogger and not necessarily reflective of a Kantara Initiative formal organizational position.
We have just released the 2.3.5 version of the UnboundID LDAP SDK for Java. You can get the latest release online at the UnboundID website or the SourceForge project page, and it's also available in the Maven Central Repository.
There are a lot of improvements in this release over the 2.3.4 version. A full copy of the release notes may be found on the UnboundID website. Many of the improvements are related to connection pooling, load balancing, and failover, but this release also includes other additions and a number of bug fixes. Some of the most notable changes include:
- Added a new fewest connections server set (see the sketch after this list). If it is used to create a connection pool in which connections span multiple servers, the pool will try to assign each newly-created connection to the server with the fewest active connections already established by that server set.
- Updated the LDAPConnectionPool class to make it possible to specify an alternate maximum connection age that should be used for connections created to replace a defunct connection. In the event that a directory server goes down and pooled connections are shifted to other servers, this can help connections fail back more quickly.
- Updated the failover server set to make it possible to specify an alternate maximum connection age for pooled connections that are established to a server other than the most-preferred server. This can help ensure that failover connections are able to fail back more quickly when the most-preferred server becomes available again.
- Added a new version of the LDAPConnectionPool.getConnection method that can be used to request a connection to a specific server (based on address and port), if such a connection is readily available.
- Added a new LDAPConnectionPool.discardConnection method that can be used to close a connection that had been checked out from the pool without creating a new connection in its place. This can be used to reduce the size of the pool if desired.
- Added a new LDAPConnection.getLastCommunicationTime method that can be used to determine the time that the connection was last used to send a request to or read a response from the directory server, and by extension, the length of time that the connection has been idle.
- Updated the connection pool so that by default, connections which have reached their maximum age will only be closed and replaced by the background health check thread. Previously, the LDAP SDK would also check the connection age when a connection was released back to the pool (an option that is still available if desired), which could cause excess load against the directory server as a result of a number of connections being closed and re-established concurrently. Further, checking the maximum connection age at the time a connection is released back to the pool could have an adverse impact on the perceived response time for an operation, because in some cases the LDAP SDK could close and re-establish the connection before the result of the previous operation was made available to the caller.
- Updated the LDIF writer to add the ability to write the version header at the top of an LDIF file, to ensure that modify change records include a trailing dash after the last change in accordance with the LDIF specification, and to fix a bug that could cause it to behave incorrectly when configured with an LDIF writer entry translator that created a new entry rather than updating the entry that was provided to it.
- Dramatically improved the examples included in the Javadoc documentation. All of these examples now have unit test coverage to ensure that the code is valid, and many of them now reflect more real-world usage.
- Improved the quality of the error messages that may be returned for operations that fail as a result of a client-side timeout, or for certain kinds of SASL authentication failures. Also improved the ability to perform low-level debugging for responses received on connections operating in synchronous mode.
- Updated the in-memory directory server to support enforcing a maximum size limit for searches.
- Added a couple of example tools that can be used to find supposedly-unique attribute values which appear in multiple entries, or to find entries with DN references to other entries that don't exist.
- Made a number of improvements around the ability to establish SSL-based connections, or to secure existing insecure connections via StartTLS. These include making it possible to specify the default SSL protocol via a system property, so that no code change is required to set a different default protocol, and allowing a timeout to be defined for StartTLS processing as part of establishing a StartTLS-protected connection.
- Fixed a bug that could cause the LDAP SDK to enter an infinite loop when attempting to read data from a malformed intermediate response.
- Fixed a bug that could cause problems in handling the string representation of a search filter that contained non-UTF-8 data.
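To make a few of the pooling changes concrete, here is a minimal sketch that combines the fewest connections server set with the new getLastCommunicationTime and discardConnection methods. The host names, credentials, pool size and idle threshold are all invented for illustration; treat this as a sketch under those assumptions rather than a recommended pattern.

import com.unboundid.ldap.sdk.FewestConnectionsServerSet;
import com.unboundid.ldap.sdk.LDAPConnection;
import com.unboundid.ldap.sdk.LDAPConnectionPool;
import com.unboundid.ldap.sdk.LDAPException;
import com.unboundid.ldap.sdk.SimpleBindRequest;

public class PoolingExample
{
  public static void main(final String[] args) throws LDAPException
  {
    // New connections go to whichever server currently has the fewest
    // active connections established by this server set.
    final String[] addresses = { "ds1.example.com", "ds2.example.com" };
    final int[] ports = { 389, 389 };
    final FewestConnectionsServerSet serverSet =
         new FewestConnectionsServerSet(addresses, ports);

    final LDAPConnectionPool pool = new LDAPConnectionPool(serverSet,
         new SimpleBindRequest("cn=Directory Manager", "password"), 10);

    // Check out a connection and see how long it has been idle.
    final LDAPConnection conn = pool.getConnection();
    final long idleMillis =
         System.currentTimeMillis() - conn.getLastCommunicationTime();

    if (idleMillis > 300000L)
    {
      // Close the connection without creating a replacement, shrinking
      // the pool rather than handing out a possibly stale connection.
      pool.discardConnection(conn);
    }
    else
    {
      pool.releaseConnection(conn);
    }

    pool.close();
  }
}

Discarding rather than releasing an idle connection shrinks the pool instead of handing a possibly stale connection to the next caller.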
Please follow my DZone article on this important topic: http://architects.dzone.com/articles/saml-versus-oauth-which-one
Mobile Application Management is making waves. Recent news from Oracle, IBM, and Salesforce highlights the market interest. It's a natural extension of what you've been hearing at Identity trade shows over the past few years (and this year's Gartner IAM Summit was no exception). The third platform of computing is not a future state. It's here. And Identity and Access solutions are adapting to accommodate the new use case scenarios. ...onward and upward.
[Update: an interesting discussion of the IAM technology stack for mobile by SIMIEO]
In my last blog post I discussed Gartner's recently published report on market trends for Cloud-based Security Services and how Gartner ranked Cloud-based Identity and Access Management (aka "Cloud IAM" aka "Cloud Identity" aka "Identity-as-a-Service") as the largest segment in this year's $2 billion cloud-based security services market. What struck me in the report were the concerns Gartner raised around cloud identity, most of which had to do with where identity data is stored. I want to use this blog post to explore that issue in a bit more detail and relate it to the Centrify approach for storing identity data.
November 20, 2013
Today, I read of at least three separate instances where class-action lawsuits have been filed on behalf of people whose personal information had been breached at a healthcare company. The largest lawsuit, filed against TRICARE, represents 4.9 million affected individuals and seeks damages of $1,000 per record – a total of $4.9 BILLION. Wow!
Neither this action nor the other similar lawsuits have yet reached court or settlement. Depending on the outcomes, the potential costs of litigation and the resulting awards to victims may emerge as the single most powerful financial driver for implementing good information security in the healthcare industry.
From the NASA Picture of the Day service:
Taking Flight at Cape Canaveral: The United Launch Alliance Atlas V rocket carrying NASA’s Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft launches from Cape Canaveral Air Force Station’s Space Launch Complex 41 on Monday, Nov. 18, 2013, in Cape Canaveral, Florida. MAVEN is the first spacecraft devoted to exploring and understanding the Martian upper atmosphere.
I love this photo!