May 22, 2013
As I reviewed news stories about the tragic Oklahoma tornado, I couldn’t help but notice the stark contrast between a photo taken from far away and one taken up close and personal. The first photo is from NASA: “The image was captured on May 20, 2013, at 19:40 UTC (2:40 p.m. CDT) as the tornado began its deadly swath.”
The second is from a CBS News account on the day the storm hit: “A child is pulled from the rubble of the Plaza Towers Elementary School in Moore, Okla., and passed along to rescuers Monday, May 20, 2013.”
My thoughts and prayers go out to the people who are struggling to cope with the aftermath of this huge disaster. How wonderful to hear stories of the many, many people who are giving personal, selfless service to help the good people of Oklahoma.
I like the diagram Mark O’Neill of Vordel put in a recent post, “Identity is the New Perimeter.” That phrase has been floating around for some time, but I think this diagram illustrates the concept in the simplest, clearest way I have seen:
The article does a good job of describing this new way of looking at security. As Mark mentioned in the post, Bill Gates once said, “security should be based on policy, not topology.”
In April 2013, McAfee announced the addition of Identity and Access Management solutions to its Security Connected portfolio. The products, previously developed and sold by Intel, include McAfee Cloud Single Sign On and McAfee One Time Password. In addition to the products, McAfee also introduced the new McAfee Identity Center of Expertise, staffed with experts in identity and cloud security. That free service will assist users with support pertaining to identity and access...
Today, I read an interesting white paper, “Big Data in M2M: Tipping Points and Subnets of Things,” published by Machina Research. From the introduction:
This White Paper focuses on three hot topics in the TMT space currently: Big Data and the ‘Internet of Things’, both examined through the prism of machine-to-machine communications. We have grouped these concepts together, since Big Data analytics within M2M really only exists within the context of heterogeneous information sources which can be combined for analysis. And, in many ways, the Internet of Things can be defined in those exact same terms: as a network of heterogeneous devices.
The white paper does a good job of exploring the emerging trends of the Internet of Things, the potential business opportunities, and the challenges faced.
As one could expect, “authenticity and security of different kinds of data,” was identified as a big challenge:
Big Data is about “mashing up” data from multiple sources, and delivering significant insights from the data. It is the combination of data from within the enterprise, from openly available data (for example, data made available by government agencies), from data communities, and from social media. And with every different source of data arises the issues of authenticity and security. Machina Research predicts that as a result of the need for data verification, enterprises will have a greater inclination to process internal and open (government) data prior to mashing-up with social media.
The following diagram shows the increased security risk as more data from external sources is collected and analyzed.
This is yet another indicator of how Identity and Access Management will be critical to the successful evolution of the Internet of Things.
May 21, 2013
Last week, I introduced my favorite topic—digital context—and laid out a plan for how to consider the case. Today, we’ll dive in with a real-world example, looking at how freeing context from across application silos helps us make more considered, immediate, and relevant access control decisions. For those of you who have been following along (and thanks for sticking with me in my madness), this is blog 8 in response to Ian Glazer’s provocative video on killing IAM in order to save it. And if you haven’t been with me from the beginning: I’m in favor of skipping the murder and going straight to the resurrection. Those of you who are coming in late to the game, here’s the recent introduction to context, or you can catch up with the entire story in order here: one, two, three, four, five, six, seven.
It All Starts with Groups: The Simple, Not Especially Sophisticated Solution
Let’s start with the notion of groups and their implementation. On the surface, nothing could be more straightforward: If I have to manage a sizeable set of users and assign them different rights to applications, I need to categorize those users into groups with the same profile, whether that’s by function, role, need to know, hierarchy, or some other factor. This is the simplest approach to any categorization: create some “relevant” labels, then assign the people that fit within those labels to define groups.
So let’s say we’re creating groups based on work functions, such as sales, marketing, production, and administration. All we need to do is list all the people under a particular function, create a label, and then assign this label to those people. Couldn’t be easier, right? The simplicity of the process explains the huge success of groups—and although we implementers tend to make fun of groups as crude categorizations, I would guesstimate that at least 90% of our authorization policies are still implemented through groups. (So much for all that talk about advanced fine-grained authorization! But I’m getting ahead of myself here…)
In fact, we’ve become so dependent on groups that in many cases, especially within sizeable organizations where the business processes are quite refined and well managed, we’re seeing that there are often more groups than users! At first glance, this seems paradoxical—after all, what’s the point of regrouping people if you have more groups than people? But the joke is on us technical people because we ignored another key reality: the business one. Sure, we may have a lot of people, but a well-managed and productive organization generally has even more activities (or different aspects of a given activity), and those activities drive the multiplication of groups. So we gave our users a simple mechanism to categorize people into groups, and they used it—talk about being a victim of our own success!
Basically, we played the sorcerer’s apprentice, and our simple formula yielded a multiplication of groups, which quickly became unmanageable. So we went back to the formula and started to tweak it, creating groups inside groups, hierarchies of groups, and nested groups; introducing Boolean operations on groups; aggregating them into roles; and so on. So what were we just saying about groups being simple? Simple for whom? Simple for the group implementers—yes, definitely. Simple for a user in charge of the initial creation of the group—sure. But add any complexity into the mix and the chaos begins.
So Much for the Digital Revolution: Every Change, Managed Manually
From a computer’s point of view, the assignment of a user to a group is totally opaque—just an explicit list entered by the person in charge of creating the group. This explicit list contains no information about why or how a user is dispatched into or associated with a group. In short, the definition of membership rests with the group owner, which is fine on the face of it. But that excludes any automated assignment of a new member to the group without manual intervention of the group owner. That means every change must be entered by hand—imagine the complexity as people constantly change roles and shift responsibilities. And imagine how easy it would be for an overworked manager to miss removing the name of the person she just fired from just one of the groups he was part of. Now imagine the security risk if that guy’s still got access to sensitive files.
Without explicitly externalizing those rules, those policies, the administration of the system becomes tied to the group owners/creators. The effort of sub-categorizing with nested groups or introducing more flexible ways to combine groups by using Boolean operators just reveals the root of the problem: When you give users better ways to characterize their groups, you are forcing those users to either make explicit the formation rules of their groups—or continue to make every single change manually, even as those changes become more complex and unmanageable.
And that’s how we (re)discovered the value of attribute-based group definitions.
Machine-Readable Groups: Using Attributes to Simplify Management and Make Policies Explicit
We realized that if we wanted to automate, to simplify the management of all these groups, we needed to describe them at the lowest level as the set of attributes that defined a given group, role, and—yes—context. We discovered that groups and policies can be managed in a more finely-grained manner with increased automation (and greater productivity!) if we characterized them as a set of attributes, combining them with the usual arsenal of Boolean expressions and functions. Basically, we needed an explicit computer representation of this characterization, instead of leaving such definitions in the head of an overtaxed administrator, hoping that auto-magically our human semantic would be interpreted and executable by our machines.
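To make this concrete, here’s a minimal Python sketch (all names and attributes are invented for illustration) of what an attribute-based group definition looks like: the group is a rule over attributes rather than an explicit member list, so membership updates itself whenever the attributes change.

```python
users = [
    {"name": "Jane", "department": "marketing", "role": "coordinator"},
    {"name": "Raj",  "department": "sales",     "role": "manager"},
    {"name": "Li",   "department": "marketing", "role": "manager"},
]

# Attribute-based group definitions: each group is a rule, not a list.
group_rules = {
    "marketing": lambda u: u["department"] == "marketing",
    "managers":  lambda u: u["role"] == "manager",
    "marketing-managers":
        lambda u: u["department"] == "marketing" and u["role"] == "manager",
}

def members(group):
    """Compute membership on demand from the group's rule."""
    return [u["name"] for u in users if group_rules[group](u)]

print(members("marketing"))           # ['Jane', 'Li']
print(members("marketing-managers"))  # ['Li']
```

If Jane is promoted to manager, no group owner has to touch any list: changing her `role` attribute automatically moves her into the right groups.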
So we looked at how we represented those policies, groups, and roles and saw that an attribute-based system was a necessary condition. But unless we go further with this analysis, we run the risk of oversimplification, of coming up with a solution that’s simplistic instead of elegantly simple—and that would only create another set of problems down the road.
So we could keep all the elements—group, subgroup, etc.—as separated “entities” and link them to a person, as in the first example above. Or we could fuse them together with the definition of a user, as we’ve done in the second example. After all, both implementations can technically yield the same categorization, meaning you can get to the definition of the groups and subgroups you need, with the right members, in both solutions.
But semantically, we’re not talking about exactly the same thing. In one case, we have a notion of groups and subgroups separated from the definition of the person. In the other, we’ve bolted those groups and subgroups on as attributes of that person. So which one is the right definition? That all depends on what you need in your representation—by which I mean it’s contextual—but it’s very important for us to fully grasp the difference. The decomposition into attributes is key for fine-grained authorization, but unless we have a clear understanding about what we are doing, we can take the decomposition too far. In such a case, the world becomes a chaotic set of attributes, where we can’t see the forest for all those trees. While we can peer into a universe made up of the most elementary particles, most real-life problems demand that we recompose that world by gluing all those objects back together again.
Breaking It Down and Building It Back Up, Better Than Before
And that is where we begin to see the need to not only decompose the world into attributes, but also to reorganize that world into objects, relationships, and context. What you get through this reorganization of your information representation is a more complete view of your system, where authorization can be enforced in a more granular way. This is the way we really intend to do it in our policies, as we would define them in natural language—and that’s exactly what we’ll be looking at in my next blog post.
So thanks for reading this introduction to my favorite topic, and be sure to check back for a deep dive into objects, relationships, and context. I’ll even show you how a marketing coordinator and a computer can learn to speak the same language!
The post From Groups to Roles to Context: The Emergence of Attributes in Authorization appeared first on Radiant Logic, Inc
We covered the key role of attributes in my last blog post, moving from the blunter scope of groups and roles to the more fine-grained approach of attributes. Now we’re going to take this progression a step further, as we narrow in on my favorite topic: digital context. (If you haven’t already, check out my first two posts on context, where I laid out the roadmap and looked at groups, roles, and attributes.) Our first order of business today is to travel back to logic class and think about predicates.* But Michel, you’re thinking, what does all this have to do with digital context? Well, one way to describe a context about something is to express it using sentences related to the question. While we will come back to the definition of context in a later post, for now let’s just say that we need some building blocks to express facts about the world, some form of sentences that can be interpreted by a computer, and logic is one of the tools for that.
Subject-Predicate-Object: First Order Logic 101
In my most recent post, we saw how the notions of groups and roles led to the increased use of attributes as a way to categorize or define identities. This should not be surprising. Behind this use of attributes lies a fundamental mechanism—a way to represent a simple fact. And it’s the same mechanism that we use when we reason based on the rules of formal logic, which has been in practice forever, or when we represent a fact on a computer (think SQL). In fact, one of the greatest achievements of the early 20th century was the formalization of logic (needed for the foundations of mathematics) and computation. This type of logical representation is core to everything we do, as reasoned thinkers and as computer scientists.
But in case you’re a few years removed from logic class, let’s examine this mechanism at work by looking at some very simple diagrams about what we are doing when we associate some attribute with a person or an object, such as assigning a person to a group:
Or assigning a subgroup to a group:
Each of these constructs can be summarized by the following diagram:
In this diagram, a fact can be asserted by the notation subject-predicate-object. In predicate logic (AKA first order logic), it’s conventionally written as predicate(X,Y), where the variables X and Y could themselves be objects (references to entities) and/or values (arbitrarily “quoted” labels belonging to the initial vocabulary of our logic system). For instance, in our example above, the facts that “Jane is a member of the product marketing group” and that “product marketing is a subgroup of marketing” can be written as memberOf(“Jane”,“Product Marketing”) and subGroupOf(“Product Marketing”,“Marketing”).
These kinds of predicates are called “binary” predicates and they are quite common. So if there are binary predicates, the astute reader (that’s you!) might well wonder if there are also unary predicates and, more generally, n-ary predicates. Indeed, the unary predicate exists and generally it’s used to assign a label to an entity—so if we want to say that Jane is an executive, you would write it as executive(“Jane”). As for the n-ary predicate, well here’s where you will find the usual “n-slots” notation of entities/tables as they’re used in the relational/SQL world. So we’d see something like this: age(“Jane”, “33”) or employee(“Jane”, “33”,”product marketing”).
Now, if you look at all those diagrams above, you’ll notice they have a direction, an orientation that tells us which entity plays the role of subject, since the subject and object of a given predicate cannot generally be swapped. This translates into a given order for the different slots of a predicate; for example, in the notation age(“Jane”, “33”), the first slot—“Jane”—is for the person, and the second—“33”—is for her age. Of course, there are always exceptions where the slots are permutable, such as the “brother” binary predicate: if x is a brother of y—brother(“x”,”y”)—then y is also a brother of x, which could read brother(“y”,”x”) = brother(“x”,”y”). But in general, order and orientation matter.
The diagrams above form directed graphs and the orientation is essential for preserving the semantics of this representation. After all, saying that x kills y—Kill(“x”,”y”)—is very different from saying that y kills x—Kill(“y”,”x”)!
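For the programmers in the audience, here’s a small sketch of this representation in Python, with facts stored as (predicate, subject, object) tuples. The example facts echo the memberOf and subGroupOf predicates above; the one-level subgroup traversal is my own illustrative addition.

```python
# Facts as (predicate, subject, object) tuples, forming a tiny directed graph.
facts = {
    ("memberOf",   "Jane",              "Product Marketing"),
    ("subGroupOf", "Product Marketing", "Marketing"),
}

def holds(predicate, subject, obj):
    """A fact is asserted simply by its presence in the set."""
    return (predicate, subject, obj) in facts

# Order matters: swapping subject and object changes the meaning entirely.
print(holds("memberOf", "Jane", "Product Marketing"))  # True
print(holds("memberOf", "Product Marketing", "Jane"))  # False

def member_via_subgroup(person, group):
    """Derived fact: membership inherited through one level of subgroup."""
    return any(
        holds("memberOf", person, sub)
        for (p, sub, g) in facts
        if p == "subGroupOf" and g == group
    )

print(member_via_subgroup("Jane", "Marketing"))  # True
```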
Essential Semantics: Describing Our World in First Order Sentences
So all this is great, but what does it have to do with context? Stay with me here…we’ve seen that when we reduce everything into attributes, we are reducing the world to first principles. But at the same time, by associating attributes to an entity and recombining them progressively through predicates, we are describing a complete world based on “sentences” of first order logic. If you combine those sentences with the usual Boolean operators (Not, And, Or, and the rest of the derived Boolean Zoo members), you get a world that’s pretty complete—complete enough to act as the foundation of mathematics.
And the good news here is that this world is also pretty close to our own “world of discourse” (albeit a lot like my English: awkward and somewhat robotic). Basically, it’s made of simple sentences in the form of subject-predicate-value (where the predicate is the adjective or qualifier), or subject-attribute-object (where the attribute is the verb). Remember our friend Jane from above? Here are some things we know related to Jane:
Jane is member of marketing group.
Product marketing is subgroup of marketing group.
The beauty of the predicate representation is that a huge part of our digital world is already encoded this way. In fact, all of our so-called “structured information”—databases, transactions, etc—runs according to these principles. But the maze of protocols and security representations we’re all dealing with, from SQL, to LDAP, to APIs, to programming languages, has long masked this reality. We need a way to rise above this modern tower of Babel, a way to translate all that structured, transactional data into something more useful, more contextually-driven. In my next post, I’m excited to show you that we’ve done exactly that: returned to first principles to deliver a “contextual and computational language” that’s as easy to interpret at the human level as it is to execute at the machine level. And this is a huge leap forward. We know we can’t teach our marketing teams to think like machines—and believe me, I’VE TRIED—but imagine a world where a business person and an application can both understand, and act on, the exact same notation. Such a world is possible today…so do not miss my next post!
PS: Some of you have been in on this series from the beginning, but all this blogging began as a response to Ian Glazer’s video on killing IAM in order to save it. For those of you just joining the story, you can catch up with the entire story here: one, two, three, four, five, six, seven.
*See what I did there? That was for all the mathematicians…and for Anil John, who’s just as big a logic geek as I am.
The post Attributes, Predicates, and Sentences: The Building Blocks of Context appeared first on Radiant Logic, Inc
By Dave Kearns
Another European Identity (and Cloud) Conference has come and gone, and once again it was an exciting week, with packed session rooms and excellent attendance at the evening events. I’m not sure we can continue to call it the “European” Identity Conference, though, as I met folks from Australia, New Zealand, Japan, South Africa, and all over North and South America. And lots of Europeans, too, I should note. Nor were the attendees content to sit back and soak it all in. At least in the sessions I conducted, there was a great deal of give and take between the audience and the speakers and panelists. Most of it was good-natured and information-seeking, but – occasionally – it got a bit raucous.
The track on authentication and authorization – so near and dear to my heart – drew a standing-room-only crowd eager to join in the discussion. As always when AuthN is discussed, passwords drew an inordinate share of the conversation. I reminded the panelists and the audience that no less a personage than Bill Gates predicted the “death of passwords” back in 2004 – and that even within Microsoft, passwords were still in use.
Too much energy is being spent on both trying to remove username/password from the authentication process and trying to “strengthen” the passwords that are used. Neither approach is going to be effective. Passwords, or the “something you know,” are far easier to use than “something you have” (a security token) and far less scary than “something you are” (biometrics), so the general public will never entertain the idea of switching.
Password strength is, essentially, a myth. Brute force attacks become quicker every day, so hacking the password directly becomes easier every day. Phishing attacks are getting so sophisticated that there’s no need to hack a password (and possibly set off security alarms) when you can induce the user to give it to you willingly.
Two-factor authentication (2FA) had some champions, but most methods have already been shown to be vulnerable either to direct attacks (man-in-the-middle style, or MITM) or to the same phishing attacks that subvert “strong” passwords. The object of the phishing attack is, after all, to get the user to log in with their credentials, which are then captured by the hacker. So go to three factors if you want – it’s not much stronger.
I found widespread agreement (with a few diehard holdouts) for a context-collecting risk-based system for Access Control (which I’ve called RiskBAC). Knowing the who, what, when, where, how and why of the authentication ceremony leaves the username/password combo as only one of many factors (the who). In fact, entering a username and correct password isn’t the end of the authentication but merely the trigger to begin the Risk-based Access ceremony or transaction. The other factors are all gathered automatically through system dialogs after the entry of the password has identified the account to which the claimant wishes access.
Of course, once we’re satisfied that the claimant is most likely who he/she claims to be, we then take that information into account along with the other contextual elements to determine the degree of access we’ll authorize to the resource they’re seeking.
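To illustrate the idea (this is purely a toy sketch of a RiskBAC-style ceremony, not any vendor’s implementation, and every signal, weight, and threshold below is invented), contextual signals gathered after the password has identified the account can be scored, and the score then determines the degree of access:

```python
def risk_score(context):
    """Accumulate risk from the how/when/where of the access attempt."""
    score = 0
    if not context.get("known_device"):          # the "how"
        score += 2
    if context.get("hour") not in range(8, 19):  # the "when": outside 8am-6pm
        score += 1
    if context.get("country") != "home":         # the "where"
        score += 3
    return score

def access_level(context):
    """Map the risk score to a degree of access, not just yes/no."""
    score = risk_score(context)
    if score == 0:
        return "full"
    if score <= 2:
        return "limited"
    return "denied"

usual = {"known_device": True,  "hour": 10, "country": "home"}
odd   = {"known_device": False, "hour": 3,  "country": "abroad"}

print(access_level(usual))  # full
print(access_level(odd))    # denied
```

The point of the sketch is the shape of the decision: the username/password combo answers only the “who,” and the remaining factors are gathered and weighed automatically.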
While the presentation was called “the Future of Authentication and Authorization,” I did remind the audience that over 2,000 years ago the Romans used the same methods for access control. Biometrics (what you are) was represented by facial recognition, tokens (what you have) by scrolls sealed with the leader’s ring (an early use of a security signature), and passwords were, well, passwords – and often changed daily to guard against leaks of the information, something more of us should do today.
There was also a contextual element to the access control ceremony when the guard, on observing the claimant, was able to identify him in the context of where he knew the face from – the morning roll call, or the guardhouse. The sealed scroll had context based on what the guard knew about the location (at the camp or thousands of miles away) and condition (alive and kicking, or breathing his last) of the official who sealed the token.
There were lots of other exciting moments – even aha! moments – in the tracks I did on Trust Frameworks and Privacy by Design, as well as in others’ sessions, especially those on Life Management Platforms, a coming technology that many who were hearing about it for the first time agreed will be game-changing when it arrives – and that may not be too far off. If you’d like to catch up, see the just-released Advisory Note: “Life Management Platforms: Control and Privacy for Personal Data” (#70745).
And there was exciting, non-Identity related, news as well. We of course announced EIC 2014 for next May but – remember up at the top of this post I said that it was a larger than European conference? Well we also announced EIC 2014 London, EIC 2014 Toronto and EIC 2014 Singapore. EIC is going worldwide, and the people involved in identity couldn’t be happier. Dates for the new venues haven’t been finalized yet, but I’ll be sure to tell you about them when they are.
Many of WAYF's identity providers are unable to deliver e-mail addresses for their users. The reason is that many institutions no longer run e-mail systems of their own, and so are no longer able to deliver this kind of information. As a result, WAYF now changes the official status of the mail attribute, from MUST to MAY. WAYF thus no longer guarantees its connected services the delivery of a valid e-mail address for every user attempting to log in.
With the Garancy Access Intelligence Manager, Beta Systems AG has brought a new, specialized solution for analyzing access entitlements to market. As the product name suggests, it is an “Access Intelligence” solution, a subdomain of IAG (Identity and Access Governance). Access Governance solutions usually already offer integrated reporting functions for the information they collect about...
After the recent wrestling match in the blogosphere that included vendors and analysts on XACML, I want to provide some best practices for access control/authorization.
The wrestling match is covered in my earlier post
Let me insert my favorite punch line before I mention the best practices.
Authentication is finite while Authorization is infinite.
Best practices for access control:
1. Know that you will need access control/authorization.
Too many times, architects spend the majority of their system security design time on authentication and federated identity, leaving limited time for authorization. Compared to authentication, authorization can get very complex over time.
2. Externalize the access control policy processing
You are headed toward disaster if your access control processing is embedded in your application, because access control requirements are never complete during the first phase of application development. Authorization rules and requirements change over the application lifecycle as business needs or the environment change. If the access control processing is not decoupled from the application, you will face hardship: lots of band-aids will be applied to the application code to meet the changing, ever-growing authorization requirements.
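As a rough sketch of what externalizing the policy looks like in practice (the policy contents and function names are invented for illustration), the application delegates every decision to a separate policy module and embeds no rules of its own:

```python
# --- policy module: the only place authorization rules live ---
POLICIES = [
    # (action, condition over the request)
    ("view_report",   lambda req: req["role"] in ("analyst", "admin")),
    ("delete_report", lambda req: req["role"] == "admin"),
]

def is_authorized(action, request):
    """Evaluate the externalized policy; rules can change without app changes."""
    return any(cond(request) for act, cond in POLICIES if act == action)

# --- application code: no authorization logic embedded ---
def delete_report(request):
    if not is_authorized("delete_report", request):
        return "403 Forbidden"
    return "deleted"

print(delete_report({"role": "analyst"}))  # 403 Forbidden
print(delete_report({"role": "admin"}))    # deleted
```

When a new authorization requirement arrives, only the `POLICIES` table changes; the application code never gets another band-aid.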
3. Understand the difference between coarse grained and fine grained authorization
Google/Bing will help you understand the difference; Wikipedia will definitely help you here. Application designers tend to create a model of authorization (for simplicity) during initial design. Almost always, this model tends to be a simple coarse-grained authorization model. The challenge is that the real-world authorization needs of your application are not set in stone; they are an ever-changing phenomenon that will pull your model in all directions.
4. Design for coarse grained authorization but keep the design flexible for fine grained authorization
This goes hand in hand with item 2, where the access control policy has to be separated or decoupled from your application. If your initial access control system or library is designed for coarse-grained authorization, the low coupling makes it easier to incorporate fine-grained authorization logic over time.
5. Know the difference between Access Control Lists and Access Control standards
Access Control Lists (ACL) are pretty popular among system designers. The challenge is that they are proprietary and not usable across applications or domains. You may earn your bonus or accolades using ACLs in your application. Over time, they tend to become restrictive due to changing requirements.
There are two prominent access control standards that I list here:
a) IETF OAuth2: a REST-style, Internet-scale, lightweight resource authorization framework.
b) OASIS XACML: a standard for fine-grained authorization. It defines an access control architecture consisting of the PEP (Policy Enforcement Point), PDP (Policy Decision Point), PIP (Policy Information Point), and PAP (Policy Administration Point).
|Fig: Typical XACML Fine Grained Access Control Architecture|
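Here is a toy sketch of that separation of duties (this is not real XACML, just an illustration of the roles): the PEP intercepts the request, the PDP evaluates a PAP-authored rule, and the PIP supplies extra attributes. All names, attributes, and rules below are invented.

```python
# Policy Information Point: supplies subject attributes on demand.
PIP_DIRECTORY = {"jane": {"department": "finance"}}

def pip_lookup(user):
    return PIP_DIRECTORY.get(user, {})

# Policy Decision Point: evaluates the (PAP-authored) policy.
def pdp_decide(user, resource, action):
    attrs = pip_lookup(user)
    # Rule: only the finance department may read ledgers.
    if resource == "ledger" and action == "read":
        return "Permit" if attrs.get("department") == "finance" else "Deny"
    return "Deny"  # default-deny for everything else

# Policy Enforcement Point: intercepts the request, enforces the decision.
def pep_handle(user, resource, action):
    decision = pdp_decide(user, resource, action)
    return f"{decision}: {user} {action} {resource}"

print(pep_handle("jane", "ledger", "read"))  # Permit: jane read ledger
print(pep_handle("bob", "ledger", "read"))   # Deny: bob read ledger
```

The value of the architecture is that each box can be replaced independently: swap the PIP for an LDAP directory, or the PDP for a real XACML engine, without touching the application sitting behind the PEP.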
6. Adopt Rule-Based Access Control: view Access Control as Rules and Attributes
Access Control should be viewed as rules on various entities (and their attributes) involved in the authorization check.
I am not forcing you to use XACML, but I would certainly encourage you to design your access control system in terms of rules and attributes. Have a look at my article on Access Control Strategies.
Hey, a Drools-based access control system is certainly not bad, as long as you decouple it from the application. It is a trade-off between proprietary, rigid ACLs and flexible, fine-grained XACML. You can manage your Drools rules via Guvnor.
7. Adopt a REST-style architecture when your situation demands scale, and with it, REST authorization standards
With the growing demand for web based services and APIs and the proliferation of mobile devices in the world, it has become essential to incorporate REST style architecture to your system design.
It is essential to use the OAuth2 standard for REST authorization. While OAuth2 takes care of defining the tokens and some rules for authorization (the scope of authorization and the actor/resource), it may still be essential for system architects to incorporate fine-grained authorization. Certainly take a look at the REST Profile of XACML v3; there is also a JSON binding available.
8. Understand the difference between Enforcement versus Entitlement model
Prominent access control strategies and standards follow the enforcement model: the access control system tries to enforce access to a resource, which leads to a yes/no type of question. The enforcement model does not scale in a cloud or resource-constrained environment.
The entitlement model is one where the access control system does not perform enforcement or access checks. Rather, it answers questions such as "What permissions does this user have?" The asker then uses the returned answer to perform local enforcement.
|Cloud Enforcement vs Entitlement Model|
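A quick sketch of the contrast, using the same (invented) permission data for both models:

```python
PERMISSIONS = {
    "jane": {"read:report", "write:report"},
    "bob":  {"read:report"},
}

# Enforcement model: one yes/no round trip per access attempt.
def enforce(user, permission):
    return permission in PERMISSIONS.get(user, set())

# Entitlement model: hand back the full permission set once; the caller
# (e.g. an app at the edge of a cloud) then enforces locally, with no
# further round trips to the access control system.
def entitlements(user):
    return PERMISSIONS.get(user, set())

print(enforce("bob", "write:report"))  # False
print(sorted(entitlements("jane")))    # ['read:report', 'write:report']
```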
May 20, 2013
It has been a few weeks since I last blogged and it's definitely time I get back into it. Since the beginning of February we (a) launched a major upgrade to Centrify Suite for UNIX/Linux/Mac; (b) entered the Windows privilege management market with DirectAuthorize for Windows; (c) are now fully participating (and doing quite well out of the gates) in the cloud identity management market with Centrify for SaaS; and (d) launched a major partnership with Samsung. And the nice thing is that this product and technology momentum is also being replicated in other areas of our business.
This promises to be a good comments thread.
can you come up with some examples of sentences that would be incomprehensible (without explanation) to a denizen of 2003 that don't revolve around ephemeral tech or pop culture churn? And can you provide and deconstruct some sentences from 2023 that, if we had sufficient foresight, we ought to be able to understand and interpolate a context for?
My fav so far. "Skype trojan forces Bitcoin mining, security firm warns"
The language of alienation - Charlie's Diary »
Some examples, culled from reddit, to get you started: hang2er: "I can't get a 4G signal here, I'll skype you on my droid as soon as I hit a hotspot, I need a coffee anyway." Retinence: "The headline, 'Galaxy Nexus: Android Ice Cream Sandwich guinea pig.'" (But tech is easy ...) ...
[from: Google+ Posts]
May 19, 2013
Chaipuccino is not a thing, no matter what Starbucks may say. If you run a cafe and you have chai tea bags as well as the usual English Breakfast, then congratulations. But putting hot frothed milk in a fancy tea pot, adding a chai tea bag, and serving it with a fancy cup is just plain wrong. Please just treat it like Workman's Tea. A mug, a tea bag, boiling water, and a splash of milk once it's brewed a bit is fine.
And Starbucks, no thanks for the Chai Tea Latte. Maybe some people like it, but I reckon that's just wrong as well.
[from: Google+ Posts]
May 18, 2013
One of the first steps taken to protect a system from authentication errors is the determination of its assurance level requirement. That risk assessment process takes as input potential harm and likelihood of harm. This blog post looks at the applicability of the likelihood factor when assessing assurance level requirements for Internet connected systems.
The classic "E-Authentication Guidance for Federal Agencies (OMB-M04-04) [PDF]" defines risk from authentication error as a function of two factors: (a) potential harm or impact and (b) the likelihood of such harm or impact. The categories of harm and impact and how to apply them, per OMB-04-04, can be found in my earlier blog post on HOW-TO Conduct a Risk Assessment to Determine Acceptable Credentials.
The key point to note is that most risk assessment methodologies allow for “tuning” the risk using a “likelihood of harm/impact” factor, which looks something like this:
Risk of Authentication Error = Potential Impact/Harm * Likelihood of Impact/Harm
But how does one determine the "likelihood of harm" number? The two classic approaches are to explore "base rates" or to consult with experts. But there is a gotcha with experts:
The simplest and most intuitive advice we can offer [...] is that when you’re trying to gather good information and reality-test your ideas, go talk to an expert. Here’s what is less intuitive: Be careful what you ask them. Experts are pretty bad at predictions. But they are great at assessing base rates.
Decisive: How to Make Better Choices in Life and Work
So a prediction by an expert may not be all that valuable. But what about the base rates? My concern there is the constantly evolving threat environment that is the Internet, and how base rates that are based on past data are an unreliable predictor of the future.
So my recommendation in this particular case is rather simple. In this type of evaluation set the "likelihood" factor equal to 1. DO NOT discount the likelihood of harm, and ALWAYS assume there is a likelihood of harm:
Risk of Authentication Error = Potential Impact/Harm * 1
What that means is that, if as part of your assurance assessment you need to factor in the impact or harm from an alien invasion, do not discount the likelihood! Stand firm, fully account for it, and put into place compensating controls to mitigate the consequences.
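The recommendation above can be sketched in a few lines; the numeric impact scale is my own illustration, not from OMB M-04-04:

```python
# Illustrative impact scale (assumption for this sketch, not from OMB M-04-04).
IMPACT = {"low": 1, "moderate": 2, "high": 3}

def risk_of_authentication_error(potential_harm, likelihood=1.0):
    """Risk = Potential Impact/Harm * Likelihood of Impact/Harm.

    Per the recommendation above, likelihood defaults to 1 so the
    potential harm is never discounted.
    """
    return IMPACT[potential_harm] * likelihood

print(risk_of_authentication_error("high"))       # likelihood fixed at 1
print(risk_of_authentication_error("high", 0.1))  # discounting (not recommended)
```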
These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer
Something to get lost in. http://electronicexplorations.org/?show=zhou
Fairly short and quirky mix of tunes "that I would want to listen to". Recommended.
"I chose to focus on the less dance floor orientated sounds for this mix and instead tried to compile a selection of tunes that I would want to listen to. It is a mix highlighting some of the music currently coming out of Bristol that I find most exciting as well as tracks that have informed the music we make ...
[from: Google+ Posts]
Today is National Buttermilk Biscuit Day. Biscuits fill me with joy, as do community integrations, so here's a post packed with deliciousness from the amazing people in the Stormpath community. (First, here's an awesome biscuit recipe. Happy Biscuit Day!)
- CAS-Addons, now with Richer Stormpath Support
- Python Login Skeleton for Stormpath
CAS-Addons, now with Richer Stormpath Support
The team at Unicon released CAS 3.5 Integration with Stormpath, which allows Stormpath to be used as a primary authentication source for CAS servers. They just added the ability to source Stormpath attributes and expose them as regular CAS Principal attributes. To quote Dmitriy at Unicon, "No need for a complex IPersonDirectoryDao impl, etc. Just a rich StormpathPrincipal encapsulating Account instances."
He also added custom XML namespace support for Stormpath-related beans. The authentication manager element now contains all the Stormpath-related objects. For example, to define a top-level authentication manager containing Stormpath handler and attributes resolution, one would simply need to do this:
- Top level AuthenticationManager bean definition
- List of handlers with default HttpBased handler and StormpathAuthenticationHandler
- List of principal resolvers with default HTTP principal resolver and StormpathPrincipalResolver (which automatically exposes Stormpath Account data as CAS Principal attributes)
...which eliminates boilerplate bean definition constructs.
Python Login Skeleton for Stormpath
Brian Peterson just released a simple and very intuitive login skeleton for Stormpath that uses the Stormpath Python SDK. This makes it really (I mean, really) easy for Pythonistas to use and understand Stormpath.
He also did a great job of explaining and diagramming the actions of the SDK. Fork it, play with it, send him (and us!) your suggestions and pull requests. As we roll out the Python SDK update, which will include 2.7 support as well as a simplifying refactor, we'll also be updating this handy tool. Nice work!
May 17, 2013
Shock horror. Festivals are expensive and only middle aged, middle class people can afford it.
Which explains how white, middle-aged and middle-class Glastonbury can appear to be. (sez the balding old git).
[from: Google+ Posts]
Access Risk Management Blog | Courion
Securing an enterprise is no mean feat and is made more difficult by the rapidly expanding use of software in the Cloud. Although security is often cited as a concern with a move to the Cloud, what may not be fully appreciated is how cloud computing amplifies the existing challenge of managing millions, if not billions, of identity and access relationships.
Check out this article by Kurt Johnson, Courion VP of Strategy and Corporate Development, to learn about the need for real-time access intelligence to manage the risk of improper access to systems and resources that span the enterprise and the Cloud, as well as how organizations can reduce risks before they become bona fide breaches.
Click here to read the full story.
Students from a range of educational institutions now have the ability to confirm, through WAYF, their student status with Mecenat, thereby gaining access to discounted purchases from Mecenat's business partners. Interested educational institutions can get further information from Lasse Urth of Mecenat (phone +45 2851 2171).
People employed at institutions using e-recruitment solutions from peopleXS now have the ability to log into the peopleXS online service using their institutional login, through WAYF. In case of interest, contact peopleXS for further information.
I am not happy with the FIDO Alliance, and their FAQ does not eliminate my concerns.
The major concern being: "Why isn't this going straight to a standards body?"
The FIDO authentication protocol needs to be part of a standardized, interoperable ecosystem to be successful. Building this ecosystem requires the active commitment of everybody from hardware chipset vendors, to the manufacturers of back-end server systems. Coordination across the divergent interests of these players is a complex affair, and one that current technical standards bodies are not well suited to [...]
The FIDO Alliance will refine the protocol, and monitor the extensions required to meet market needs and to make the protocol robust and mature. Implementation will not be undertaken by the FIDO Alliance. The mature protocol will be presented to the IETF, W3C or similar body after which it will be open to all industry players to [...]
This is what standardization bodies' working groups are for: work on protocols and formats, work on security considerations, use the experience of "the community".
So FIDO is developing a protocol and will then present it to one standardization body...
Meanwhile it is a closed thing, and it costs significant amounts of money to join the alliance.
That is neither free nor open.
During IIW there were several sessions on FIDO. Each was full of good intentions and marketing speak but no substance. No real information. You have to join the alliance to get that. Well, ...
Somebody at Nok Nok Labs convinced somebody at PayPal to hire them and to found FIDO. Why Google joined despite its support for the W3C WebCrypto group, I have no idea.
The W3C WebCrypto group is where this belongs. This might need rechartering of the group, but that is doable. Especially if the proposal is backed by a prototype implementation. Especially if it is backed by PayPal, Lenovo, Google, NXP and others.
I believe that we need better authentication methods beyond username and password. I think that bring-your-own (hardware) identity might work towards that goal. I believe that mobile phones, SIM cards and NFC help to achieve this. I believe that the mobile wallet is the right user interface for choosing your identity.
I believe that doing it in a closed group is not the right way.
May 16, 2013
The 2013 edition of the European Identity & Cloud Conference just finished. As always, KuppingerCole Analysts created a great industry conference, and I am glad I was part of it this year. To relive the conference you can search for the tag #EIC13 on Twitter.
KuppingerCole manages each time to get all the Identity thought leaders together which makes the conference so valuable. You know you’ll be participating in some of the best conversations on Identity and Cloud related topics when people like Dave Kearns, Doc Searls, Paul Madsen, Kim Cameron, Craig Burton … are present. It’s a clear sign that KuppingerCole has grown into the international source for Identity related topics if you know that some of these thought leaders are employed by KuppingerCole themselves.
Throughout the conference a few topics kept popping up making them the ‘hot topics’ of 2013. These topics represent what you should keep in mind when dealing with Identity in the coming years:
XACML and SAML are ‘too complicated’
It seems that after the announced death of XACML everyone felt liberated and dared to talk. Many people find XACML too complicated. Soon SAML joined the club of 'too complicated'. The source of the complexity was identified as XML, SOAP and satellite standards like WS-Security.
There is a reason protocols like OAuth, which stay far away from XML and family, have gained so many followers so rapidly. REST and JSON have become 'sine qua non' for Internet standards.
There is an ongoing effort for a REST/JSON profile for XACML. It's not finished, let alone adopted, so we will have to wait and see what it brings.
That reminds me of a quote from Craig Burton during the conference:
Once a developer is bitten by the bug of simplicity, it’s hard to stop him.
It sheds some light on the (huge) success of OAuth and other Web 2.0 API’s. It also looks like a developer cannot be easily bitten by the bug of complexity. Developers must see serious rewards before they are willing to jump into complexity.
OAuth 2.0 has become the de-facto standard
Everyone declared OAuth 2.0, and its cousin OpenID Connect, to be the de facto Internet standard for federated authentication.
Why? Because it’s simple: even a mediocre developer who hasn’t seen anything but bad PHP is capable of using it. Try to achieve that with SAML. Of course, that doesn’t mean it’s without problems. OAuth uses Bearer tokens that are not well understood by everyone, which leads to some frequently seen security issues in the use of OAuth. On the other hand, given the complexity of SAML, do we really think everyone would use it as it should be used, avoiding security issues? Yes, indeed …
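A bearer-token request really is that simple, which is both the appeal and the risk: whoever presents the token gets access. A minimal sketch (RFC 6750 style) using only the Python standard library; the URL and token value are placeholders:

```python
# Minimal sketch of an OAuth 2.0 bearer-token request; URL and token
# are illustrative placeholders, not real credentials.
import urllib.request

def bearer_request(url, access_token):
    # Anyone holding the token can use it -- hence "bearer", and hence
    # the need for TLS and careful token handling.
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Bearer " + access_token)
    return req

req = bearer_request("https://api.example.com/resource", "2YotnFZFEjr1zCsicMWpAA")
print(req.get_header("Authorization"))
```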
The API Economy
There was a lot of talk about the ‘API Economy’. There are literally thousands and thousands of publicly available APIs (called “Open APIs”) and magnitudes more hidden APIs (called “Dark APIs”) on the web. It has become so big and pervasive that it has become an ecosystem of its own.
New products and cloud services are being created around this phenomenon. It’s not just about exposing a REST/JSON interface to your data. You need a whole infrastructure: throttling services, authentication, authorization, perhaps even an app store.
It’s also clear that developers are once more an important group. There is no use in an Open API if nobody can or is willing to use it. Companies that depend on the use of their Open API suddenly see a whole new type of customer: developers. Having a good Developer API Portal is a key success factor.
Context for AuthN and AuthZ
Many keynotes and presentations referred to the need for authn and authz to become ‘contextual’. It was not entirely clear what was meant by that; nobody could give a clear picture. No idea what kind of technology or new standards it will require. But everyone agreed this is where we should be going.
Obviously, the more information we can take into account when performing authn or authz, the better the result will be. Authz decisions that take present and past into account and not just whatever is directly related to the request, can produce a much more precise answer. In theory that is …
The problem with this is that computers are notoriously bad at anything that is not rule based. Once you move up the chain and start including the context, next the past (heuristics) and finally principles, computers give up pretty fast.
Of course, nothing keeps you from defining more rules that take contextual factors into account. But I would hardly call that ‘contextual’ authz. That’s just plain RuBAC with more PIPs available. It only becomes interesting if the authz engine is smart in itself and can decide, without hard wiring the logic in rules, which elements of the context are relevant and which aren’t. But as I said, computers are absolutely not good at that. They’ll look at us in despair and beg for rules, rules they can easily execute, millions at a time if needed.
The last day there was a presentation on RiskBAC or Risk Based Access Control. This is situated in the same domain of contextual authz. It’s something that would solve a lot but I would be surprised to see it anytime soon.
Don’t forget, the first thing computers do with anything we throw at them is turn it into numbers. Numbers they can add and compare. So risks will be turned into numbers using rules we gave to computers, and we all know what happens if we, humans, forget to include a rule.
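A toy illustration of "plain RuBAC with more PIPs available": ordinary rules that merely consult contextual attributes. All attribute names and rules here are invented for the example:

```python
# Sketch of rule-based access control where the rules happen to consult
# contextual attributes (network, time of day) fetched from PIPs.
# Roles, actions and attribute names are illustrative.

def authorize(subject, action, resource, context):
    rules = [
        # Each rule is just a predicate over request + context attributes.
        lambda: subject["role"] == "claims_worker" and action == "read",
        lambda: subject["role"] == "manager"
                and context.get("network") == "corporate"
                and 8 <= context.get("hour", 0) < 18,
    ]
    return any(rule() for rule in rules)

print(authorize({"role": "manager"}, "approve", "claim-42",
                {"network": "corporate", "hour": 10}))
print(authorize({"role": "manager"}, "approve", "claim-42",
                {"network": "home", "hour": 23}))
```

The point stands: adding context attributes to the predicates does not make the engine itself "contextual"; the logic is still hard-wired rules.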
Graph Stores for identities
People got all excited by Graph Stores for identity management. Spurred by the interest in NoSQL and Windows Azure Active Directory Graph, people saw it as a much better way to store identities.
I can only applaud the refocus on relations when dealing with identity. It’s what I have been saying for almost 10 years now: identities are the manifestations of a relationship between two parties. I had some interesting conversations with people at the conference about this and it gave me some new ideas. I plan to pour some of those into a couple of blog articles. Keep an eye on this site.
The graph stores themselves are a rather new topic for me so I can’t give more details or opinions. I suggest you hop over to that Windows Azure URL and give it a read. Don’t forget that ForgeRock already had a REST/JSON API on top of their directory and IDM components.
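To make the "identities as relationships" idea concrete, here is a toy in-memory graph sketch. The parties and edge labels are invented; a real graph store would persist and index this:

```python
# Toy identity graph: each identity is an edge (a relationship) between
# two parties. Names and relation labels are illustrative.
from collections import defaultdict

graph = defaultdict(list)

def relate(party_a, relation, party_b):
    graph[party_a].append((relation, party_b))

relate("alice", "employee_of", "AcmeCorp")
relate("alice", "customer_of", "BankCo")
relate("AcmeCorp", "tenant_of", "CloudProvider")

# Each "identity" of alice is one relationship edge:
print(graph["alice"])
```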
Life Management Platforms
Finally there was an entire separate track on Life Management Platforms. It took me a while to understand what it was all about. Once I found out it was related to Doc Searls’ VRM project, it became clearer.
Since this recap is almost getting longer than the actual conference, I’ll hand the stage to Martin Kuppinger and let him explain Life Management Platforms.
That was the 2013 edition of the European Identity & Cloud Conference for me. It was a great time, and even though I haven’t even gotten home yet, I already intend to be there again next year.
This morning, I read a recent Oracle White Paper entitled “Transforming Customer Experience: The Convergence of Social, Mobile and Business Process Management.” It gave an interesting perspective on the blending of emerging paradigms – mobile and social – with the older discipline of Business Process Management.
To stay ahead in today’s rapidly changing business environment, organizations need agile business processes that allow them to adapt quickly to evolving markets, customer needs, policies, regulations, and business models. … Social and mobile business models have already contributed important new frameworks for collaboration and information sharing in the enterprise. While these technologies are still in a nascent state, BPM and service oriented architecture (SOA) solutions are well established, providing a history of clear and complementary benefits.
The key is effectively leveraging the strengths of existing, proven architectures while taking advantage of new opportunities:
The term “Social BPM” is sometimes used to describe the use of social tools and techniques in business process improvement efforts. Social BPM helps eliminate barriers between decision makers and the people affected by their decisions. These tools facilitate communication that companies can leverage to improve business processes. Social BPM enables collaboration in the context of BPM and adds the richness of modern social communication tools.
… Social BPM increases business value by extracting information from enterprise systems and using it within social networks. Meanwhile, social technologies permit employees to utilize feedback from social networks to improve business processes.
I found one use case presented in the paper to be particularly instructive. As illustrated in the following diagram,
A claims management system assigns a task to an individual claims worker with the expectation that the user will complete the task to advance the process. Of course, to accomplish this type of knowledge-based task, the individual must often engage other people within the business.
However, Social BPM enables the use of social networking tools to extend collaboration beyond the traditional enterprise boundaries, as shown in the following diagram:
Not only can internal knowledge workers use social networking tools to find each other and share information, but also customers can interact with the process at specific steps, using mobile devices, to supply their own information into a business process. For example, a customer involved in an auto accident might upload photos taken with a cell phone into the process via a claims management app provided by the insurance company.
In order to make this all work, participants will need to use both enterprise and social identity credentials. Because they are using mobile devices, the IAM system must accommodate mobile, social and cloud infrastructures in order to effectively use information. This is very much in line with the principles set forth in the Gartner Nexus I addressed yesterday.
Realists have no idea how they ended up living on this once hospitable planet with all these fools
Chinese Demand, Peak Oil And Realism - Decline of the Empire »
This is the third and final day of my spring fundraiser. If you value this website, consider making a donation via the Donate (Paypal) button on this page, or by sending a check or money order to the PO Box I gave you in Tuesday's post. Thanks - Dave [Tony Judt's book Ill Fares the Land] has a touch of prophecy in the authentic sense of that term. Prophecy is not about foretelling the future; it is about warning those in the present that unless th...
[from: Google+ Posts]
16 May (tonight), MS Stubnitz, Canary Wharf, London for some Real time, Algorithmically Generated Techno wholly or predominantly characterised by the emission of a succession of repetitive conditionals.
Time to dust off the Music Tech dissertation and rhythm generator using Markov chains in the time domain.
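Not the dissertation's generator, but a minimal sketch of a first-order Markov chain over drum states in the time domain; the transition probabilities are made up:

```python
# Toy first-order Markov chain rhythm generator; the transition table
# is invented for illustration.
import random

TRANSITIONS = {
    "kick":  {"kick": 0.1, "snare": 0.5, "hat": 0.4},
    "snare": {"kick": 0.6, "snare": 0.1, "hat": 0.3},
    "hat":   {"kick": 0.4, "snare": 0.3, "hat": 0.3},
}

def generate_rhythm(start="kick", steps=16, rng=random):
    state, pattern = start, [start]
    for _ in range(steps - 1):
        # Pick the next drum hit according to the current state's weights.
        choices, weights = zip(*TRANSITIONS[state].items())
        state = rng.choices(choices, weights=weights)[0]
        pattern.append(state)
    return pattern

print(generate_rhythm(steps=8))
```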
London (MS Stubnitz) Algorave on 16th May 2013 »
When: 7pm-11:30pm, Thursday 16 May 2013 Where: MS Stubnitz, Montgomery Street, Canary Wharf tube, London E14 9SB Tax: £9 advance tickets (or plenty on the door for £10) We're back on-board the MS S...
[from: Google+ Posts]
May 15, 2013
I’m pleased to report that OAuth 2.0 has won the 2013 European Identity Award for Best Innovation/New Standard. I was honored to accept the award from Kuppinger Cole at the 2013 European Identity and Cloud Conference on behalf of all who contributed to creating the OAuth 2.0 standards [RFC 6749, RFC 6750] and who are building solutions with them.
Today I read a year-old document published by Gartner, entitled, “The Nexus of Forces: Social, Mobile, Cloud and Information.” It explains the interaction among these market forces better than any single document I have read:
Research over the past several years has identified the independent evolution of four powerful forces: social, mobile, cloud and information. As a result of consumerization and the ubiquity of connected smart devices, people’s behavior has caused a convergence of these forces.
In the Nexus of Forces, information is the context for delivering enhanced social and mobile experiences. Mobile devices are a platform for effective social networking and new ways of work. Social links people to their work and each other in new and unexpected ways. Cloud enables delivery of information and functionality to users and systems. The forces of the Nexus are intertwined to create a user-driven ecosystem of modern computing. (my emphasis added)
Excerpts from Gartner’s treatment of each of these areas include:
Social is one of the most compelling examples of how consumerization drives enterprise IT practices. It’s hard to think of an activity that is more personal than sharing comments, links and recommendations with friends. Nonetheless, enterprises were quick to see the potential benefits. Comments and recommendations don’t have to be among friends about last night’s game or which shoes to buy; they can also be among colleagues about progress of a project or which supplier provides good value. Consumer vendors were even quicker to see the influence — for good or ill — of friends sharing recommendations on what to buy.
Mobile computing is forcing the biggest change to the way people live since the automobile. And like the automotive revolution, there are many secondary impacts. It changes where people can work. It changes how they spend their day. Mass adoption forces new infrastructure. It spawns new businesses. And it threatens the status quo.
Cloud computing represents the glue for all the forces of the Nexus. It is the model for delivery of whatever computing resources are needed and for activities that grow out of such delivery. Without cloud computing, social interactions would have no place to happen at scale, mobile access would fail to be able to connect to a wide variety of data and functions, and information would be still stuck inside internal systems.
Developing a discipline of innovation through information enables organizations to respond to environmental, customer, employee or product changes as they occur. It will enable companies to leap ahead of their competition in operational or business performance.
Gartner’s conclusion offers this challenge:
The combination of pervasive mobility, near-ubiquitous connectivity, industrial compute services, and information access decreases the gap between idea and action. To take advantage of the Nexus of Forces and respond effectively, organizations must face the challenges of modernizing their systems, skills and mind-sets. Organizations that ignore the Nexus of Forces will be displaced by those that can move into the opportunity space more quickly — and the pace is accelerating.
So, what does this mean for Identity and Access Management? Just a few thoughts:
- While “Social Identity” and “Enterprise Identity” are often now considered separately, I expect that there will be a convergence, or at least a close interoperation of, the two areas. The boundaries between work and personal life are being eroded, with work becoming more of an activity and less of a place. The challenge of enabling and protecting the convergence of social and enterprise identities has huge security and privacy implications.
- We cannot just focus on solving the IAM challenges of premises-based systems. IAM strategies must accommodate cloud-based and premises-based systems as an integrated whole. Addressing one without the other ignores the reality of the modern information landscape.
- Mobile devices, not desktop systems, comprise the new majority of user information tools. IAM systems must address the fact that a person may have multiple devices and provide uniform means for addressing things like authentication, authorization, entitlement provisioning, etc. for use across a wide variety of devices.
- We must improve our abilities to leverage the use of the huge amounts of information generated by mobile/social/cloud platforms, while protecting the privacy of users and the intellectual property rights of enterprises.
- Emerging new computing paradigms designed to accommodate these converging forces, such as personal clouds, will require built-in, scalable, secure IAM infrastructure.
- The Gartner Nexus doesn’t explicitly address the emergence of the Internet of Things, but IoT fits well within this overall structure. The scope of IAM must expand to not only address the rapid growth of mobile computing devices, but the bigger virtual explosion of connected devices.
We live in an interesting time. The pace of technological and social change is accelerating. Wrestling with and resolving IAM challenges across this rapidly changing landscape is critical to efforts to not only cope with but leverage new opportunities caused by these transformative forces.
Today, services like authorization and authentication are delivered via APIs: JSON / REST HTTP “endpoints.” Some of the most popular authentication APIs on the Internet use different profiles of OAuth2. Because consolidation increases efficiency, Google, Microsoft, Yahoo, and others came together to define one standard profile for OAuth 2.0 authentication: OpenID Connect.
OpenID Connect documents a single profile of OAuth2 that can be used by any Internet domain. One standard for domain authentication will simplify security for application developers (web and mobile), make end users more secure, and enable easier integration of mobile devices and cloud agents.
See Toshiba Cloud TV in Action.
Specifically, OpenID Connect defines several endpoints to enable domains to offer: (1) user authentication; (2) client registration; (3) client authentication; (4) user claims; (5) client claims; and (6) discovery. Industry analysts are predicting that OpenID Connect is on a trajectory for significant adoption. The standard should be finalized by the end of 2013. Nat Sakimura (NTT), Vice-Chairman of the OpenID Foundation, has said this about OpenID Connect: “we are done apart from formalities.”
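Discovery is what ties the other endpoints together: a client fetches the provider's configuration document from its /.well-known/openid-configuration URL and reads the endpoint locations from it. A sketch with a hand-made document (the issuer and URLs are fictional, not from a live provider):

```python
# Sketch of consuming an OpenID Connect discovery document; the JSON is a
# hand-made example. In practice a client would fetch it over HTTPS from
# https://<issuer>/.well-known/openid-configuration.
import json

discovery_json = """{
  "issuer": "https://op.example.com",
  "authorization_endpoint": "https://op.example.com/authorize",
  "token_endpoint": "https://op.example.com/token",
  "userinfo_endpoint": "https://op.example.com/userinfo",
  "registration_endpoint": "https://op.example.com/register",
  "jwks_uri": "https://op.example.com/jwks"
}"""

config = json.loads(discovery_json)
# The client now knows where to send users to authenticate:
print(config["authorization_endpoint"])
```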
For reasons like these, Toshiba decided in 2012 to align with OpenID Connect. As Gluu’s open source “OX” platform performed well in the OpenID Connect OpenID Provider (“OP”) Interop, Toshiba decided it was preferable to use OX rather than write their own implementation.
Learn more about OpenID Connect via slides from Microsoft’s Michael B. Jones.
The partnership with Toshiba has driven the implementation of a number of features to the OX platform. For example, they wanted to build a highly available “cluster” of authentication servers delivered across multiple geographic regions to ensure business continuity. This would enable Toshiba engineers to take a server out for maintenance, and just add it back later.
Toshiba has also been helpful with testing and benchmarking. OX has been in production there since last year, so we have also been able to observe the behavior of the platform over time, while handling significant load.
Gluu has also built features to enable Toshiba to use the central publication of multi-party federation metadata to enable globally delivered websites to trust identity providers in different regions (Japan, US, and Europe) without persisting any personally identifiable data outside of the region. Although JSON multiparty federation metadata is not currently a feature of OpenID Connect, Gluu has documented its implementation at the OpenID Foundation in the Emerging Work Section, and hopes it will be included in a subsequent release: http://wiki.openid.net/w/page/59727624/Multi-Party%20Federations
Toshiba is keen to promote the OX open source platform within the SmartTV Alliance, which is why they authorized the May 1, 2013 press release. Adoption of the OX open source platform will help members of the SmartTV Alliance collaborate on the development of an Internet scale, interoperable security infrastructure, a goal everyone wants to achieve.
Gluu provides services to companies that want to use the OX platform: Design, Build, Operate, and Transfer (DBOT). We were able to help Toshiba engineers jumpstart their development effort and to provide some tactical feature enhancements in the open source project to support their rollout.
European Identity Award 2013 for “Best Innovation/New Standard in Information Security”: A new standard that rapidly gained momentum and plays a central role in future concepts of Identity Federation and Cloud Security.
Special Award 2013 for “Bridging the organizational gap between Business and IT”: A project far above average when it comes to Business/IT alignment, successfully setting up a framework of guidelines and policies plus the required organizational entities, and rolling this out into a global organization.
European Identity Award 2013 in the category “Best Access Governance and Intelligence Project”: A holistic IAM/IAG approach following new architectural concepts and enabling Dynamic Authorization Management based on business rules.
Special Award 2013 for “Rapid Re-Design and Re-Implementation of the Entire IAM”: Moving from a traditional, Active Directory-centric environment to full HR integration on a global scale and full support for automated provisioning, based on a clearly defined roadmap for further improvement.
European Identity Award 2013 in the category “Best Access Governance and Intelligence Project”: Implementing cross-divisional SoD rules on a global scale at business level, with full integration into the existing Access Governance solution.
This evening, as part of the seventh European Identity & Cloud Conference (EIC), the analyst group KuppingerCole presented the European Identity & Cloud Award 2013 in various categories. The award recognizes outstanding projects and initiatives in the areas of Identity & Access Management (IAM), GRC (Governance, Risk Management and Compliance) and Cloud Security. Nominated were numerous projects which, over the course of the last 12 months, ... by user companies and vendors...
Google All Access. "Radio Without Rules", Music streaming $9.99 pm, $7.99 pm early adopters. 30 days free. An extension to Google Music allowing clever playlists and instant access to any track in Google's library. Along with some more smarts for exploring based on your listening and library habits. USA Today. Other countries rolling out "soon".
No thanks. I've already got 30k tracks in my personal collection.
[from: Google+ Posts]
May 14, 2013
Social media is fast becoming the identity mechanism of choice to log into popular sites and company information. Looking to find the right music on Spotify? Want to connect with the world’s professionals on LinkedIn? You can now simply log in via your Facebook account. The UK Government may even soon allow citizens to use their social media identity to access public services safely and securely...
On May 7, Andras Cser of Forrester Research, Inc. posted a thought-provoking blog entry entitled “XACML is Dead” which postulated that there wasn’t any future for XACML.
At CA Technologies we have long supported a broad range of industry standards such as LDAP, X.509, WS-Federation, SAML, WS-Security, REST, SPML as well as more recent standards like OpenID, OpenID Connect and OAuth, thereby...
Summer is coming, which means hurricane and tornado season is here. Do you have a contingency plan for your critical IT infrastructure? If so, is…
I’ve posted the OpenID Connect Update presentation that I gave today during the OpenID Workshop at the European Identity and Cloud Conference. It’s available in PowerPoint and PDF formats.
Big Data is characterized by three properties: there is now an enormous quantity of data which exists in a wide variety of forms and is being generated very quickly. However, the term “Big Data” is as much a reflection of the limitations of the current technology as it is a statement on the quantity, speed or variety of data. The term Big Data needs to be understood as data which has greater quantity, variety or speed than can be comfortably processed using the technology that...
Big Data provides many opportunities to solve emerging business challenges, and Big Data technologies can create business value. However, Big Data also creates security challenges that need to be considered by organizations adopting or using Big Data techniques and technologies. This paper outlines the information security risks involved in Big Data and recommends responses based on the concepts of information stewardship and information-centric security...
Life Management Platforms will change the way individuals deal with sensitive information like their health data, insurance data, and many other types of information – information that today frequently is paper-based or, when it comes to personal opinions, only in the mind of the individuals. They will enable new approaches for privacy and security-aware sharing of that information, without the risk of losing control of that information. A key concept is “informed pull”...
In his article “ArchiMate from a data modelling perspective” Bas van Gils from BiZZdesign talks about the difference between conceptual, logical and physical levels of abstraction. This distinction is very often used in (enterprise) IT architecture but is often also poorly understood, defined or applied.
Bas refers to the TOGAF/IAF definitions:
TOGAF seems to follow the interpretation close to Capgemini’s IAF where conceptual is about “what”, logical is about “how” and physical is about “with what”. In that case, conceptual/logical appears to map on the architecture level, whereas physical seems to map on the design/implementation level. All three are somewhat in line but in practice we still see people mix-and-match between abstraction levels.
I am not a fan of the above. It is one of those definitions that tries to explain a concept by using specific words in the hope of evoking a shared emotion. Needless to say, this type of definition is at the heart of many open-ended and often very emotional online discussions.
Conceptual, logical and physical most often relate to the idealization–realization spectrum of abstraction. This spectrum abstracts ‘things’ by removing elements related to their realization. Conversely, it elaborates ‘things’ by adding elements related to a specific realization. You can say that a conceptual model contains fewer realization-related elements than a logical model, and that a physical model contains more realization-related elements than a logical model.
In other words, conceptual, logical and physical are relative to each other. They do not point to a specific abstraction. For that, you need to specify exactly what kinds of realization-related elements you want to abstract away at each level.
The most commonly used reference model for using these three levels is as follows:
- Conceptual. All elements related to an implementation with an Information System are abstracted away.
- Logical. A realization with an Information System is not abstracted away anymore. All elements related to a technical implementation of this Information System are abstracted away.
- Physical. A technical realization is assumed and not abstracted away anymore.
That is the only way to define the levels conceptual, logical and physical: define what type of realization-related elements are abstracted away at each level. You can never assume everyone uses the same reference model. You either pick an existing one (e.g. Zachman Framework) or define your own.
Saying that conceptual is “what”, logical is “how” and physical is “with what” is confusing to say the least. Especially if you know that in the Zachman Framework “how” and “what” are even orthogonal to “conceptual” and “logical”.
At first, it is not easy to define a conceptual model without referring to an Information System. For instance, any reference to lists, reports or querying assumes an Information System and therefore already belongs to the logical model.
A misunderstanding I often hear is that conceptual means (a lot) less detail than logical. That is not true. A conceptual model can comprise as many diagrams and pages of text as a logical model. In practice, conceptual models are often more limited, but I only have to point to the many IT projects that failed because of too little detail at the conceptual level. It’s just wrong.
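To make the three reference levels concrete, here is a deliberately small, hypothetical example of the same "thing" modelled at each level; the entity names and schema are invented purely for illustration, not taken from any framework:

```python
# The same business fact -- "a customer places orders" -- at the three
# levels of the reference model described above. Names are illustrative.

# Conceptual: no Information System is assumed; a statement about the
# business domain only:
#   "A Customer places one or more Orders."

# Logical: an Information System is assumed, but no specific technology.
# For example, entities with attributes and a relationship:
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer_id: int  # references a Customer

# Physical: a concrete technical realization is now assumed, e.g. a
# relational schema for a specific DBMS:
DDL = """
CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE customer_order (
    order_id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customer(customer_id)
);
"""
```

Note that the logical model is not "less detailed" than the physical one; it simply abstracts away the choice of DBMS, data types and storage.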
Which type of glasses do you prefer?
I love this photo of Chris Cassidy, one of our great NASA astronauts, at work.
The NASA web site explains:
Repairing the Station in Orbit
Expedition 35 Flight Engineers Chris Cassidy (pictured) and Tom Marshburn (out of frame) completed a spacewalk at 2:14 p.m. EDT May 11, 2013 to inspect and replace a pump controller box on the International Space Station’s far port truss (P6) leaking ammonia coolant. The two NASA astronauts began the 5-hour, 30-minute spacewalk at 8:44 a.m.
A leak of ammonia coolant from the area near or at the location of a Pump and Flow Control Subassembly was detected on Thursday, May 9, prompting engineers and flight controllers to begin plans to support the spacewalk. The device contains the mechanical systems that drive the cooling functions for the port truss.
What a thrill it must be for these guys!
May 13, 2013
Over on the Forrester blogs, I take a look at XACML, advocating that it needs to refactor heavily to meet mobile/cloud authorization policy needs. UMA as a potential enterprise “access management 2.0” solution makes an appearance as well. Quoting the post: “Would an XACML.next that concentrates on ‘growing the pie’ for declarative authorization policy be valuable? Would an integration of web and post-web access management help you achieve your goals?” If you have thoughts on this, check out the post and let me know…
As in the past years, KuppingerCole has worked out the Top Trends in IAM/IAG (Identity and Access Management/Governance), Cloud Computing, and Information Protection and Privacy. The most important trends are the massive increase in demand for support of the “Extended Enterprise” in IAM/IAG, the cloud stratification in various layers, increasing threats imposed by the rise of cybercrime, and the emergence of Life Management Platforms. In the following sections, we name the five...
Identity and Access Management (IAM) is a holistic approach to managing identities (both internal and external) and their access within an organisational framework. The key benefit to the business should be to enable people to do their jobs more effectively. If deployed correctly, IAM can help achieve this in a multitude of different ways for different departments and roles within them; internal staff and external partners and customers. However, this also makes it a complex issue which...
The “good old days” are gone forever.
Those were the days when IT environments were more predictable and easier to control. The user population and their access patterns were more easily defined. Stick a firewall in front of key systems, create some controls around who can access what, and you’re done.
The world is far different now. The headlong march towards the cloud has made the...
Andras Cser probed a sore spot in IAM last week with his post, "XACML Is Dead." It's a necessary conversation (though I did see a glint in his eye at the Forrester BT Forum after he pressed Publish!). Our Q3 2012 Identity Standards TechRadar showed that XACML has already crested the peak of its moderate success trajectory, heading for decline. We haven't seen its business value-add or ecosystem grow since then, despite the publication of XACML 3.0 and a few other bright spots, such as Axiomatics' recent funding round.
It's not that we don't need an interoperable solution for finer-grained access control. But the world's demands for loosely coupled identity and access systems have gotten...well, more demanding. The solution needs to be friendly to open web API security and management. It needs to be friendly to mobile developers. And it most certainly needs to be prepared to tackle the hard parts of integrating authorization with truly heterogeneous cloud services and applications, where business partners aren't just enterprise clones, but may be tiny and resource-strapped. This admittedly gets into business rather than technical challenges, but every ounce of technical friction makes success in the business realm less likely.
From Martin Kuppinger:
Since my colleague Craig Burton declared that SAML is dead, it seems to be in vogue among analysts to take the role of the public medical officer and diagnose the death of standards, or even of IAM (Identity and Access Management) in general. Admittedly, the latter case was not about diagnosing death but about proposing to kill IAM, but that does not change much. The newest in this series of casualties is XACML, according to another industry analyst. So we are surrounded by corpses now, or maybe by living zombies. But is that really true? My colleague Craig Burton titled his blog post – for a very good reason – “SAML is Dead! Long Live SAML!” That is fundamentally different from saying “XACML is dead”.
There are a lot of good answers from experts such as Ian Glazer, Gerry Gebel (OK, he might be a little biased being the President of Axiomatics Americas), or Danny Thorpe.
I can hardly be suspected of being an enthusiastic XACML evangelist wearing blinders. Just ask some of the Axiomatics guys – we had many controversial discussions over the years. However, for me it is clear that neither Dynamic Authorization Management in general nor XACML in particular is dead.
What puzzled me most in this blog post was that part of the initial sentence:
XACML … is largely dead or will be transformed into access control
OK, “access control”. XACML is access control. Access control is everything around authentication and authorization. So what does this mean? I just do not understand that sentence, sorry. XACML is a part of the overall Access Control story.
From my perspective, the two most important concepts within access control are Dynamic Authorization Management and Risk-/Context-Based Access Control (i.e. both Authentication and Authorization). The latter only will work with Dynamic Authorization Management in place. When we know about the context and the risk and make authorization decisions based on that, then we need systems that externalize authorization and rely on rules that can take the context into account.
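Externalized, context-aware authorization of the kind described above can be sketched in a few lines. The following is a minimal, illustrative policy decision point (PDP); all attribute names, rules, and the risk logic are hypothetical and not taken from XACML or any product:

```python
# Minimal sketch of a policy decision point (PDP) that evaluates
# attribute- and context-based rules, in the spirit of Dynamic
# Authorization Management. Attribute names and rules are illustrative.

def decide(subject: dict, resource: dict, context: dict) -> str:
    """Return "Permit" or "Deny" for a subject/resource/context triple."""
    # Rule 1: only doctors may access medical records.
    if resource.get("type") == "medical-record":
        if subject.get("role") != "doctor":
            return "Deny"
    # Rule 2: context-based risk check -- high-sensitivity resources
    # may only be accessed from the corporate network during the day.
    if resource.get("sensitivity") == "high":
        if not context.get("on_corporate_network", False):
            return "Deny"
        if not 8 <= context.get("hour", 0) < 18:
            return "Deny"
    return "Permit"

# A policy enforcement point (PEP) in the application would call the
# externalized PDP rather than hard-coding these rules itself:
decision = decide(
    subject={"id": "alice", "role": "doctor"},
    resource={"type": "medical-record", "sensitivity": "high"},
    context={"on_corporate_network": True, "hour": 10},
)
print(decision)  # Permit
```

The point of the sketch is the separation: the application only asks the question; the rules, including the contextual ones, live outside the application and can change without touching its code.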
The challenge with Dynamic Authorization Management, i.e. technologies implemented in a variety of products such as the Axiomatics Policy Server, the Oracle Entitlements Server, the IBM Security Policy Manager, Quest APS, and many others, is that it requires changes in both application code and the mindset of software developers and architects. That is a long journey. On the other hand, we see increasing acceptance and use of such technologies. Notably, Dynamic Authorization Management is not new. You will find such concepts dating back to the mid ‘70s in mainframe environments, and IBM’s good old RACF can be considered an early example.
You can still argue that Dynamic Authorization Management is alive but that XACML, the most important standard around it, is dead. There are good arguments against that, and I will not repeat what the experts mentioned above have said. You might discuss where to use XACML and where to rely on proprietary technology. However, do you really want to lock your entire application landscape into the proprietary Dynamic Authorization Management technology of a single vendor? That would be a nightmare. You need to isolate your applications from the Dynamic Authorization Management system in use, and a standard helps in doing that. Just think about being locked into proprietary interfaces for all of your applications using a specific Dynamic Authorization Management system for the next 30, 40 or more years.
XACML is even the better choice for COTS applications: they can rely on a standard instead of every vendor building proprietary connectors. Most vendors will do that for Microsoft SharePoint, because SharePoint is so important. But that is the exception, not the rule. And deducing from the fact that vendors support SharePoint with proprietary interfaces (instead of XACML) that XACML is dead is simply a false conclusion. The problem in that case is not XACML but the SharePoint security model, which is clearly not the best I have ever seen (to say the least). XACML is of value. Standards are of value. And I believe you would need much better reasons to diagnose the death of standards.
To learn more about the real trends in IAM, IAG, Cloud Security, and many other topics, just visit the EIC 2013 that starts on Tuesday, May 14th.
The ready availability of cloud services has made it easy for employees and associates to obtain and use these services without consideration of the potential impact on the organization. Therefore, in order to ensure good governance over the use of cloud services, it is imperative that organizations create and communicate a policy for their acquisition and use. This should be supported by a simple, fast and reliable risk based process for cloud service procurement and complemented by...
Most large organizations and a significant number of medium-sized organizations have heavily invested in IAM (Identity and Access Management) and IAG (Identity and Access Governance) during the past few years. Some projects went well; others did not deliver as expected. But even organizations that run successful IAM/IAG projects are challenged by new evolutions, such as the increasing relevance of the “Computing Troika” of Cloud Computing, Mobile Computing, and Social Computing...
May 12, 2013
Identity, authentication, attribute management and authorization domain experts tend to seek clear distinctions between each of those facets. The operational folks who actually deal with these issues often blur the boundaries between them. This blog post shows an example of laying out access control use cases from an operational perspective that I found rather educational.
With the current buzz around mobility and BYOD, there is sometimes a belief that the infrastructure and choices that exist today will have to be completely re-done in order to accommodate new devices. While I am not sure about that, I recently saw a public NASA ICAM presentation that outlined a framework for how to look at access control from an operational perspective that I found relevant.
I've kept the concept, but changed some of the details for the sake of clarity:
The key to the above visualization is to know that no one does credentialing and authentication for its own sake, but as a means to manage access to a system or resource. From an operational perspective, it allows for calling out an end-to-end process in natural language: "A person who is anonymous, using an organization-managed PC, on the organization's network, wants to access administrator-level functions during normal business hours".
You can then lay out the use case variations using a tabular format:
| Use Case | Applicability | Priority | Criteria A |
| --- | --- | --- | --- |
It immediately gives you a way to articulate possibilities that may or may not apply to you: What if it was a smartphone instead of the PC? What if the connection is from the Internet? It also provides insight into which aspects change and which remain the same.
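The variations in such a table can even be enumerated mechanically from the natural-language template. A small sketch, where the dimensions and their values are illustrative and loosely follow the example sentence above (not taken from the NASA ICAM material):

```python
# Enumerate access-control use case variations by combining the
# dimensions of the natural-language template. Values are illustrative.
from itertools import product

dimensions = {
    "person": ["anonymous", "an authenticated employee"],
    "device": ["an organization-managed PC", "a personal smartphone"],
    "network": ["the organization's network", "the Internet"],
    "function": ["read-only functions", "administrator-level functions"],
    "time": ["normal business hours", "after hours"],
}

# Every combination of dimension values is a candidate use case row.
rows = list(product(*dimensions.values()))
print(len(rows))  # 32 (2^5 candidate use cases)

# Each row reads back as a natural-language use case:
person, device, network, function, time = rows[0]
print(f"A person who is {person}, using {device}, on {network}, "
      f"wants to access {function} during {time}.")
```

From the generated list, you would then assign applicability and priority per row, discarding combinations that do not apply to your environment.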
Do you have any pointers to frameworks like these that help to clarify choices people need to make regarding access controls?
These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer
May 10, 2013
Read this and be inspired. http://www.electricbike.com/dogmans-tale/
Dogman's Tale »
When I started chatting on the internet, I wanted a screen name to protect my identity. So I chose Dogman because I am part dog, I speak dog fluently, and I have always had a pack of dogs around ...