October 22, 2014

Julian Bond: I want to buy a final edition 7th gen iPod Classic 160 with the v2.0.5 firmware in the UK. I don't mind... [Technorati links]

October 22, 2014 02:31 PM
I want to buy a final edition 7th gen iPod Classic 160 with the v2.0.5 firmware in the UK. I don't mind a few scratches as long as the display is still ok. Anyone?

I need a 7th generation Classic 160 because that model went back to a single-platter drive and there's a 240GB disk that fits and works. The previous 6th gen 160 (which I have) used a dual-platter drive with an unusual interface and can't be upgraded.

About 2 years ago, the final 7th Gen Classic iPod was updated with slightly different hardware that works with the final v2.0.5 firmware. If it came with 2.0.4 then it probably can't be upgraded. 2.0.5 is desirable because there's a software setting to disable the EU volume limit. When Apple did all this, they didn't actually update the product codes or SKUs, so people will claim they have a 7th Gen MC297QB/A or MC297LL/A and it might or might not be the right one. The only way to be sure is to try and update it to 2.0.5.

So at the moment I'm chasing several on eBay but having to wait for the sellers to confirm what they're actually selling before putting in bids and losing out. Apparently I'm not alone as prices are rising. The few remaining brand new ones are quoted on "Buy Now" prices at a premium, sometimes twice the final RRP. Gasp!

I f***ing hate Apple for playing all these games. I hate them for discontinuing the Classic 160. I hate that there's no real alternative.

1st world problems, eh? It seems like just recently I keep running up against this. I'm constantly off balance because things I thought were sorted and worked OK are no longer available. Or the company's gone bust or been taken over. Or the product has been updated and what was good is now rubbish. Or the product is OK, but nobody actually stocks the whole range so you have to buy it on trust over the net.
[from: Google+ Posts]

Julian Bond: Aphex Twin leaking a fake version of Syro a few weeks before the official release was genius. It's spread... [Technorati links]

October 22, 2014 08:55 AM
Aphex Twin leaking a fake version of Syro a few weeks before the official release was genius. It's spread all over the file sharing sites so it's hard to find the real release. The file names match[1]. The music is believable but deliberately lacks lustre. It's really a brilliant pastiche of an Aphex Twin album, as if some Russian producer had gone out of their way to make an homage to what they thought Aphex Twin was doing.

http://www.electronicbeats.net/en/features/reviews/the-fake-aphex-twin-leak-is-a-hyperreal-conundrum/

[1] What's a bit weird though is that the MP3 files not only have the same filenames but seem to be the same size and have the same checksum.
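If you have both copies to hand, one quick way to test that claim (the file names below are just placeholders, not the actual track names) is to compare sizes and checksums directly:

# compare size and MD5 checksum of the two copies (paths are hypothetical)
ls -l fake/track01.mp3 real/track01.mp3
md5sum fake/track01.mp3 real/track01.mp3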
 EB Reviews: Fake 'Syro' – Electronic Beats »
Am I one of those people who can't tell the difference between authentic Aphex tunes and sneering knockoffs?

[from: Google+ Posts]
October 21, 2014

Courion: Data Breach? Just Tell It Like It Is [Technorati links]

October 21, 2014 10:25 PM


Kurt Johnson, Vice President of Strategy and Corporate Development, has posted a blog on Wired Innovation Insights titled Data Breach? Just Tell It Like It Is.

In the post, Kurt discusses the negative PR implications of delayed breach disclosure and recommends improving your breach deterrence and detection capabilities by continuously monitoring identity and access activity for anomalous patterns and problems, such as orphan accounts, duties that need to be segregated, ill-conceived provisioning or just unusual activity.

Read the full post now.


Neil Wilson - UnboundID: UnboundID LDAP SDK for Java 2.3.7 [Technorati links]

October 21, 2014 05:32 PM

We have just released the 2.3.7 version of the UnboundID LDAP SDK for Java. You can get the latest release online at the UnboundID Website, the SourceForge project page, or in the Maven Central Repository.

Complete release note information is available online on the UnboundID website, but some of the most significant changes include:

OpenID.net: Notice of Vote for Errata to OpenID Connect Specifications [Technorati links]

October 21, 2014 05:43 AM

The official voting period will be between Friday, October 31 and Friday, November 7, 2014, following the 45 day review of the specifications. For the convenience of members, voting will actually open a week before Friday, October 31 on Friday, October 24 for members who have completed their reviews by then, with the voting period still ending on Friday, November 7, 2014.

If you’re not already a member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration.
A description of OpenID Connect can be found at http://openid.net/connect/. The working group page is http://openid.net/wg/connect/.

The vote will be conducted at https://openid.net/foundation/members/polls/86.

– Michael B. Jones, OpenID Foundation Secretary

OpenID.net: Notice of Vote for Implementer’s Draft of OpenID 2.0 to OpenID Connect Migration Specification [Technorati links]

October 21, 2014 05:38 AM

The official voting period will be between Friday, October 31 and Friday, November 7, 2014, following the 45 day review of the specification. For the convenience of members, voting will actually open a week before Friday, October 31 on Friday, October 24 for members who have completed their reviews by then, with the voting period still ending on Friday, November 7, 2014.

If you’re not already a member, or if your membership has expired, please consider joining to participate in the approval vote. Information on joining the OpenID Foundation can be found at https://openid.net/foundation/members/registration.

A description of OpenID Connect can be found at http://openid.net/connect/. The working group page is http://openid.net/wg/connect/.

The vote will be conducted at https://openid.net/foundation/members/polls/81.

– Michael B. Jones, OpenID Foundation Secretary

October 20, 2014

Radovan Semančík - nLight: Project Provisioning with midPoint [Technorati links]

October 20, 2014 01:09 PM

Evolveum midPoint is a unique Identity Management (IDM) system and a robust open source provisioning solution. Being open source, midPoint is developed in a fairly rapid, incremental and iterative fashion, and the recent version introduced a capability that allows midPoint to reach beyond the traditional realm of identity management.

Of course, midPoint is great at managing and synchronizing all types of identities: employees, contractors, temporary workers, customers, prospects, students, volunteers - you name it, midPoint does it. MidPoint can also manage and synchronize functional organizational structure: divisions, departments, sections, etc. Even though midPoint does this better than most other IDM systems, these features are not exactly unique just by themselves. What is unique about midPoint is that it refines these mechanisms into generic and reusable concepts. MidPoint mechanisms are carefully designed to work together. This makes midPoint much more than just a sum of its parts.

One interesting consequence of this development approach is the unique ability to provision projects. We all know the new lean, project-oriented enterprises. The importance of the traditional tree-like functional organizational structure is diminished and a flat project-based organizational structure takes the lead. Projects govern almost every aspect of company life, from the development of a new groundbreaking product to the refurbishing of an office space. Projects are created, modified and closed almost on a daily basis. The supporting setup is usually done manually by system administrators: create a shared folder on a file server, set up proper access control lists, create a distribution list, add members, create a new project group in Active Directory, add members, create an entry in the task tracking system, bind it with the just-created group in Active Directory ... It all takes a couple of days or weeks to be done. This is not very lean, is it?

MidPoint can easily automate this process. Projects are yet another type of organizational unit that midPoint manages. MidPoint can maintain an arbitrary number of parallel organizational structures, therefore adding an orthogonal project-based structure to an existing midPoint deployment is a piece of cake. MidPoint also supports membership of a single user in an arbitrary number of organizational units, which effectively creates a matrix organizational structure. As midPoint projects are just organizational units, they can easily be synchronized with other systems. MidPoint can be configured to automatically create the proper groups, distribution lists and entries in the target systems. And as midPoint knows who the members of the project are, it can also automatically add the correct accounts to the groups it has just created.

Project provisioning

However, this example is just too easy. MidPoint can do much more. And anyway, the modern leading-edge lean progressive organizations are not only project-based but also customer-oriented. The usual requirement is not only to support internal projects but especially customer-facing projects. Therefore I have prepared a midPoint configuration that illustrates automated provisioning of such customer projects.

The following screenshot illustrates the organizational structure maintained in midPoint. It shows a project named Advanced World Domination Program (or AWDP for short). The project is implemented for a customer, ACME, Inc. The project members are jack and will. You can also see another customer and a couple of other projects there. The tabs also show the different (parallel) organizational structures maintained by midPoint. But now we only care about the Customers structure.

Project provisioning: midPoint

MidPoint is configured to replicate the customer organizational unit to LDAP. Therefore it will create the entry ou=ACME,ou=customers,dc=example,dc=com. It is also configured to synchronize the project organizational unit to LDAP as an LDAP group. Therefore it will create the LDAP entry cn=AWDP,ou=ACME,ou=customers,dc=example,dc=com with a proper groupOfNames object class. MidPoint is also configured to translate project membership in its own database into group membership in LDAP. Therefore the cn=AWDP,... group contains members uid=jack,... and uid=will,....
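As a rough sketch of what ends up in the directory (the host, port, credentials and the user-entry suffix below are placeholders, not taken from the article), the provisioned entries could be checked with a standard ldapsearch:

# list the customer unit and project group created by midPoint (connection details are placeholders)
ldapsearch -h ldap.example.com -p 389 -D "cn=Directory Manager" -w password \
  -b "ou=ACME,ou=customers,dc=example,dc=com" "(objectClass=*)" member
# expected result, roughly (user DN suffixes are hypothetical):
# dn: ou=ACME,ou=customers,dc=example,dc=com
# dn: cn=AWDP,ou=ACME,ou=customers,dc=example,dc=com
# member: uid=jack,ou=people,dc=example,dc=com
# member: uid=will,ou=people,dc=example,dc=com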

Project provisioning: LDAP

Similar configuration is used to synchronize the projects to GitLab. The customers are transformed to GitLab groups and GitLab projects are created in accord with midPoint projects.

Project provisioning: GitLab

... and project members are correctly set up:

Project provisioning: GitLab

All of this is achieved using a relatively simple configuration. The configuration consists of just four XML files: one file to define access to each resource, one file for the customer meta-role and one for the customer project meta-role. No custom code is used except for a couple of simple scriptlets amounting to just a few lines of Groovy. Therefore this configuration is easily maintainable and upgradeable.

And midPoint can do even more still. As midPoint has a built-in workflow engine, it can easily handle project membership requests and approvals. MidPoint has a rich delegated administration capability, therefore management of project information can be delegated to project managers. MidPoint synchronization is bi-directional and flexible: one system can be authoritative for some part of the project data (e.g. scope and description) and another system for a different type of data (e.g. membership). And so on. The possibilities are countless.

This feature is a must-have for any lean project-oriented organization. Managing projects manually is the 20th-century way of doing things and it is a waste of precious resources. Finally midPoint is here to bring project provisioning into the 21st century. And it does it cleanly, elegantly and efficiently.

(Reposted from https://www.evolveum.com/project-provisioning-midpoint/)

Vittorio Bertocci - Microsoft: TechEd Europe, and a Quick Jaunt in UK & Netherlands [Technorati links]

October 20, 2014 08:07 AM


I can’t believe TechEd Europe is already a mere week away!

This year I have just 1 session, but TONS of new stuff to cover… including news we didn’t break yet! 75 mins will be very tight, but I’ll do my best to fit everything in. If you want to put it in the calendar, here are the coordinates:

DEV-B322 Building Web Apps and Mobile Apps Using Microsoft Azure Active Directory for Identity Management
Thursday, October 30 5:00 PM – 6:15 PM, Hall 8.1 Room I

As usual, I would be super happy to meet you during the event. I’m scheduled to arrive on Monday evening, and I plan to stay in Barcelona until Friday morning – there’s plenty of time to sync up, if we plan in advance. Feel free to contact me here.
Also, I plan to be at the Ask the Experts night – it will be right after my talk, hence that should provide a nice extension to the Q&A.

Given that I don’t often cross the pond anymore these days, the week after I’ll stick around and spend 3 days in the UK (Reading) and 2 days in Amsterdam. (I’ll also spend the weekend in Italy, but I doubt you want to hear about identity development on a Saturday). I believe the agenda is already pretty full, but if you are in the area and interested feel free to ping me – we can definitely check with the local teams if we can still find a slot.

Looking forward to meeting and chatting!

October 18, 2014

Nat Sakimura: Guidelines Published for Providing Information and Explanations with Consideration for Consumer Privacy in Online Services (METI / Ministry of Economy, Trade and Industry) [Technorati links]

October 18, 2014 11:08 PM

The guidelines that I helped develop as a member of the review committee have been published.

METI: “Guidelines for Providing Information and Explanations with Consideration for Consumer Privacy in Online Services”

These guidelines were developed in response to the “Standards for Enhancing Information Provision and Explanation to Consumers”, which were produced based on the deliberations of the IT Integration Forum Personal Data Working Group two years ago and the subsequent trial of prior consultations with business operators carried out last fiscal year. They summarize, among other things, how to notify consumers when collecting personal data from them and what to do when the purpose of use or the scope of provision changes.

Similar efforts are starting in other countries, and since widely divergent rules would make compliance very hard for businesses, especially online, international harmonization is also needed. As part of that effort, the guidelines are scheduled to be submitted to ISO/IEC, and a Study Period proposal will be made at the ISO/IEC JTC 1/SC 27/WG 5 meeting in Mexico that starts on October 20.

I will report separately on how that goes.

(From Mexico City)

Nat Sakimura: Executive Order Issued to Improve the Security of Consumer Financial Transactions – Chip-Enabled Credit Cards, Multi-Factor Authentication for Government Sites, and More [Technorati links]

October 18, 2014 10:29 PM

On October 17, 2014, an Executive Order to improve the security of consumer financial transactions [1] was issued.

Executive Order – Improving the Security of Consumer Financial Transactions – The White House

The main contents are the following three items.

In Japan and Europe, chip-equipped credit cards have become the norm, but in the US they are still far from widespread. Section 1 of this Executive Order is meant to improve that situation. By moving credit card payments at government agencies to chip-based cards, it aims to act as a catalyst that pushes credit card issuers to issue chip-equipped cards.

As it happens, on the same day the Daily Telegraph ran an article titled

Sorry Mr President; your credit card has been declined
Barack Obama’s card rejected at trendy New York restaurant Estela

The article reports that President Obama tried to pay at a New York restaurant called Estela and his card was declined. Since magnetic stripes are easy to clone, credit card companies perform risk-based authentication using patterns from past transactions, and because Mr. Obama has hardly used his card since becoming President, the payment at this restaurant was flagged as an anomalous transaction. According to the article:

“Apparently, because I don’t use the card very often, they thought some fraud was going on. I’m glad Michelle had her card.”

“I explained to the waitress that I have been paying my bills properly, but still this is what happened.”

President Obama told Richard Cordray, Director of the US Consumer Financial Protection Bureau, that this episode shows the need to introduce simpler ways to protect credit card customers.

He praised the chip-and-PIN payment system [3], which has become the norm in Europe but not in the US.

(Source) Rosa Prince: “Sorry Mr President; your credit card has been declined”, Daily Telegraph, 2014/10/17

It does rather smell like a planted story, but that only shows how serious they are. Incidentally, the story also ran on the AP wire and on Fox News. Nobody reads a story about security, but everybody will read a juicy story about the President’s card being declined. That is clever media strategy.

Identity theft has been a social problem for quite some time. Section 2 is the response to that, although rather than prescribing concrete measures it orders that measures be developed.

Section 3 raises the security level used when citizens access their own personal information on government sites, thereby improving privacy protection. According to what I had heard beforehand from White House sources, this, like Section 1, is intended to act as a catalyst for the private sector. It also seems that SP 800-63 may be revised in the future to accommodate this [4]. The 18-month deadline presumably means that the FCCX implementation is expected to be finished by then.

The announcement happened to coincide with my trip from Tokyo to Mexico, so I am a little late writing this up, but I suspect this is still the first report in Japan…

Bye for now!

(From Mexico City)

[1] Executive Order – Improving the Security of Consumer Financial Transactions, http://www.whitehouse.gov/the-press-office/2014/10/17/executive-order-improving-security-consumer-financial-transactions

[2] National Strategy for Trusted Identity in Cyberspace

[3] chip-and-pin payment system

[4] Under the current SP 800-63, multi-factor authentication implies LoA 3, but requiring LoA 3 identity proofing would probably be too harsh, so I expect that either multi-factor authentication will be required at LoA 2, or the credential level and the identity proofing level will be decoupled.

[Related articles]

Anil John: A Simple Framework for Trusted Identities [Technorati links]

October 18, 2014 10:15 AM

What does it take to enable a person to say who they are in the digital world while having the same confidence, protections and rights that they expect in the real world? This guest post by Tim Bouma explores the question in a manner that is relevant across jurisdictions, independent of delivery channels and technology neutral.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Mike Jones - Microsoft: JOSE -35 and JWT -29 drafts addressing AppsDir review comments [Technorati links]

October 18, 2014 01:29 AM

I’ve posted updated JOSE and JWT drafts that address the Applications Area Directorate review comments. Thanks to Ray Polk and Carsten Bormann for their useful reviews. No breaking changes were made.

The specifications are available at:

HTML formatted versions are available at:

October 17, 2014

Kantara Initiative: Kantara Initiative Helps Accelerate Time-to-Market for Digital Citizen Services [Technorati links]

October 17, 2014 03:17 PM

Premier US Approved Trust Framework Provider (TFP) supports the Presidential Executive Order and the vision of the US National Strategy for Trusted Identities in Cyberspace

Piscataway, NJ, October 17, 2014 – Kantara Initiative, the premier US ICAM Approved Trust Framework Provider (TFP) approving 3rd party Credential Service Providers (CSPs), is positioned to support today’s Presidential Executive Order and the vision of the US National Strategy for Trusted Identities in Cyberspace (NSTIC).

Kantara Initiative helps to connect and accelerate identity services time-to-market by enabling trusted on-line transactions through our innovations and compliance programs.

Kantara’s Identity Assurance Program provides technical and policy approvals of businesses that want to connect to US Federal Government Agencies and commercial markets, ensuring companies like CA Technologies, Experian, ForgeRock, and Radiant Logic can offer compelling new government services while also safely and securely complying with government technical rules and regulations.

Board of Trustee President, Allan Foster (ForgeRock) says, “ForgeRock is thrilled to support the US Presidential Executive Order to drive identity services innovation. Kantara Members are positioned at the strategic intersection where relationship based identity services meet trust and usability.”

“As the first 3rd party assessed Kantara Approved Identity Proofing Service Component, Experian continues to innovate using its proven industry position to deliver identity proofing services at the highest industry standard,” said Kantara Board of Trustee Vice President Philippe de Raet (Experian).

Kantara Trustee Members IEEE-SA, Internet Society, and NRI provide a solid foundation of expertise developing open standards that connect unique opportunities via shared resources and know-how.

“The Internet Society is pleased to see this commitment, at the highest level, to steps that will drive the digital identity infrastructure to mature more rapidly, with a focus on security and authentication that moves us beyond the era of user IDs and passwords,” said Kantara Board Member Robin Wilton (ISOC).

Join Kantara Initiative to influence strategy and grow markets where business, research and innovation intersect.

About Kantara Initiative: Kantara Initiative accelerates identity services markets by developing innovations and programs to support trusted on-line transactions. The membership of Kantara Initiative includes international communities, industry, research & education, and government stakeholders. Join. Innovate. Trust. http://kantarainitiative.org

OpenID.net: The Name is the Thing: “The ARPU of Identity” [Technorati links]

October 17, 2014 01:49 PM

The name is the thing. The name of this Open Identity Exchange White Paper, the “ARPU of Identity”, is deliberate. ARPU, Average Revenue Per User, is one metric telcos use to measure success. By deliberately using a traditional lens that telcos use, this paper puts emerging Internet identity markets into a pragmatic perspective. The focus of the white paper is on how mobile network operators (MNOs) and other telcos can become more involved in the identity ecosystem and thereby improve their average revenue per user, or ARPU. This perspective continues OIX’s “Economics of Identity” series, or as some call it the “how do we make money in identity” tour of the emerging Internet identity ecosystem. OIX commissioned a white paper reporting the first quantitative analysis of the Internet identity market in the UK; HMG Cabinet Office hosted workshops on the topic at KPMG’s headquarters in London and at the University of Washington’s Gates Center in Seattle.

The timing of this paper on business interoperability coincides with work groups in the OpenID Foundation developing the open standards that MNOs and other telco players will use to ensure technical interoperability. GSMA’s leadership with OIX on pilots in the UK Cabinet Office Identity Assurance Program and in the National Strategy for Trusted Identities in Cyberspace offers opportunities to test both business and technical interoperability leveraging open standards built on OpenID Connect. The timing is the thing. The coincidence of white papers, workshops and pilots in the US, UK and Canada with leading MNOs provides a real-time opportunity for telcos to unlock their unique assets to increase ARPU and protect the security and privacy of their subscribers/citizens.

In my OpenID Foundation blog, I referenced Crossing the Chasm, where Geoffrey A. Moore argues there is a chasm between future interoperability that technology experts build into standards and the pragmatic expectations of the early majority. OIX White Papers, workshops and pilots help build the technology tools and governance rules needed for the interoperability to successfully cross the “chasm.”

Several OIX White Papers speak to the “supply side”: how MNOs and others can become Identity Providers (IDPs), Attribute or Signal Providers in Internet identity markets. Our next OIX White Paper borrows an industry meme (and T-Shirt) for its title, “There’s No Party Like A Relying Party”. That paper speaks to the demand side. Relying Parties (RPs) like banks, retailers and others that rely on identity attributes and account signals to better serve and secure customers and their accounts depend on technical, business and legal interoperability.

By looking at the “flip sides” of supply and demand, OIX White Papers help us better understand the ARPU, the needs for privacy and security and the economics of identity.

Don

OpenID.net: Crossing the Chasm of Consumer Consent [Technorati links]

October 17, 2014 01:47 PM

This week Open Identity Exchange publishes a white paper on the “ARPU of Identity”. The focus of the white paper is on how MNOs and telecommunications companies can monetize identity markets and thereby improve their average revenue per user, or ARPU. Its author and highly regarded data scientist, Scott Rice, makes a point that caught my eye. It’s the difficulty in federating identity systems because consumer consent requirements and implementations vary widely and are a long way from being interoperable. It got my attention because Open Identity Exchange and the GSMA lead pilots in the US and UK with leading MNOs with funding in part from government. The National Strategy for Trusted Identities in Cyberspace and the UK Cabinet Office Identity Assurance Program are helping fund pilots that may address these issues. Notice and consent involves a governmental interest in protecting the security and privacy of its citizens online. It’s a natural place for the private sector to leverage the public-private partnerships Open Identity Exchange has helped lead.

Notice and consent laws have been around for years.  The Organization for Economic Co-operation and Development, or OECD, first published their seminal seven Privacy Guidelines in 1980.  But in 1980, there was no world wide web nor cell phone.  Credit bureaus, as we know them today, didn’t exist; no “big data” or data brokers collecting millions of data points on billions of people.  What privacy law protected then was very different than what it needs to protect now.  Back then, strategies to protect consumers were based on the assumption of a few transactions each month, not a few transactions a day.  OECD guidelines haven’t changed in the last 34 years. Privacy regulations and, specifically, the notice and consent requirements of those laws lag further and further behind today’s technology.

In 2013, OIX Board Member company Microsoft and Oxford University’s Oxford Internet Institute (OII) published a report (updated in March of this year) outlining recommendations for revising the 1980 OECD Guidelines. Their report makes recommendations for rethinking how consent should be managed in the internet age. It makes the point that expecting data subjects to manage all the notice and consent duties of their digital lives circa 2014 is unrealistic if we’re using rules developed in 1980. We live in an era where technology tools and governance rules assume the notice part of “notice and consent” requires the user to agree to a privacy policy. The pragmatic choice is to trust our internet transactions to “trusted” Identity Providers (IDPs), Service Providers (SPs) and Relying Parties (RPs). The SPs, RPs, IDPs, government and academic organizations that make up the membership of Open Identity Exchange share at least one common goal: increasing the volume, velocity and variety of trusted transactions on the web.

The GSMA, Open Identity Exchange and OpenID Foundation are working on pilots with industry leading MNOs, IDPs and RPs to promote interoperability, federation, privacy and respect for the consumer information of which they are stewards. The multiple industry sectors represented in OIX are building profiles to leverage the global adoption of open standards like OpenID Connect. Open identity standards and private sector led public-private partnership pilots help build the business, legal and technical interoperability needed to protect customers while also making the job of being a consumer easier.

Given the coincidence of pilots in the US, UK and Canada over the coming months, it is increasingly important to encourage government and industry leaders and privacy advocates to build on the interoperability and standardization of consumer consent and privacy that standards like OpenID Connect bring to authentication.

Don

October 16, 2014

Kuppinger Cole: IAM for the User: Achieving Quick-wins in IAM Projects [Technorati links]

October 16, 2014 07:35 PM
In KuppingerCole Podcasts

Many IAM projects struggle or even fail because demonstrating their benefit takes too long. Quick-wins that are visible to the end users are a key success factor for any IAM program. However, just showing quick-wins is not sufficient, unless there is a stable foundation for IAM delivered as result of the IAM project. Thus, building on an integrated suite that enables quick-wins through its features is a good approach for IAM projects.



Watch online

Mythics: It’s Time to Upgrade to Oracle Database 12c, Here is Why, and Here is How [Technorati links]

October 16, 2014 06:16 PM

Well, it's that time again, when the whole Oracle database community will be dealing with the questions around upgrading to Database 12c from 11g (and some…

Kuppinger Cole: Mobile, Cloud, and Active Directory [Technorati links]

October 16, 2014 04:56 PM
In Martin Kuppinger

Cloud IAM is moving forward. Even though there is no common understanding of which features are required, we see more and more vendors – both start-ups and vendors from the traditional field of IAM (Identity and Access Management) – entering that market. Aside from providing an alternative to established on-premise IAM/IAG, we also see a number of offerings that focus on adding new capabilities for managing external users (such as business partners and consumers) and their access to Cloud applications – a segment we call Cloud User and Access Management.

There are a number of expectations we have for such solutions. Besides answers on how to fulfill legal requirements regarding data protection laws, especially in the EU, there are a number of other requirements. The ability to manage external users and customers with flexible login schemes and self-registration, inbound federation of business partners and outbound federation to Cloud services, and a Single Sign-On (SSO) experience for users are among these. Another one is integration back to Microsoft Active Directory and other on-premise identity stores. In general, being good in hybrid environments will remain a key success factor and thus a requirement for such solutions in the long run.

One of the vendors that have entered the Cloud IAM market is Centrify. Many will know Centrify as a leading-edge vendor in Active Directory integration of UNIX, Linux, and Apple Macintosh systems. However, Centrify has grown beyond that market for quite a while, now offering both a broader approach to Privilege Management with its Server Suite and a Cloud User and Access Management solution with its User Suite.

In contrast to other players in the Cloud IAM market, Centrify takes a somewhat different approach. On one hand, they go well beyond Cloud-SSO and focus on strong integration with Microsoft Active Directory, including supporting Cloud-SSO via on-premise AD – not a surprise when viewing the company’s history. On the other hand, their primary focus is on the employees. Centrify User Suite extends the reach of IAM not only to the Cloud but also to mobile users.

This makes Centrify’s User Suite quite different from other offerings in the Cloud User and Access Management market. While they provide common capabilities such as SSO to all types of applications, integration with Active Directory, strong authentication of external users, and provisioning to Cloud/SaaS applications, their primary focus is not on simply extending this to external users. Instead, Centrify puts its focus on extending their reach to supporting both Cloud and Mobile access, provided by a common platform, delivered as a Cloud service.

This approach is unique, but it makes perfect sense for organizations that want to open up their enterprises to both better support mobile users as well as to give easy access to Cloud applications. Centrify has strong capabilities in mobile management, providing a number of capabilities such as MDM (Mobile Device Management), mobile authentication, and integration with Container Management such as Samsung Knox. All mobile access is managed via consistent policies.

Centrify User Suite is somewhat different from the approach other vendors in the Cloud User and Access Management market took. However, it might be the single solution that best fits the needs of customers, particularly when they are primarily looking at how to enable their employees for better mobile and Cloud access.

Matthew Gertner - AllPeers: How To Decide When It Has Come Time To Bug Out [Technorati links]

October 16, 2014 04:34 PM

There are a variety of circumstances and situations which will require you to bug out of your home or temporary shelter. For those of you who are unfamiliar, the term bug out refers to when your position has been compromised and you need to move on. Although this term found its origins in army and soldier slang, it still rings true for emergency situations. If you and your family are holed up after a natural disaster and the damage has been catastrophic, it may be time for you to move on and find a better place to stay. Considering a variety of factors, you will have to make the tough decision to leave your possessions and other valuables behind in favour of a safer environment. Because you can’t take everything with you, having a bug out bag (along with a first-aid kit and food stockpile) is essential for every family. You will need to consider the unique circumstances that surround your disaster to assess when the proper time to bug out is.

Has The Natural Disaster Caused Significant Damage To Your Home And Community

After any disaster, it’s your responsibility to step outside and assess the damage. In some places, tornadoes and hurricanes can level entire city blocks after a period of only two to five minutes. If your community and home have been destroyed and you’re able to recognize this from the safety of your hideaway, it may be time to bug out and find emergency shelters set up somewhere else. Connecting with other neighbors who are contemplating the same situation can give you a network of people to move with, therefore protecting yourselves from people who are in panic mode and looking to steal from families. By keeping your cool and calmly letting your family know of the situation up top, you can all come to a collective agreement about what your next steps should be.

Do You Feel Unsafe In Your Home Or Shelter

There will come times when you’ll need to bug out from shelters as well, which depends on a variety of different circumstances. If you’ve been holed up at home for a while and more damage starts appearing, then it may be time for you to grab the kit and family and move everyone to a safer location. A home that has been destroyed above can still pose some serious threats, especially if your safe zone is located underground. Additional collapse of your home could trap you inside and rescue crews may not reach you for months on end. This is why it is crucial to make a decision early on, for the best possible reasons. Just because you are attached to your home doesn’t mean it’s the end of the world. Material possessions, just like homes, can be replaced. What matters is if your family is in immediate danger by staying in the same spot. Always consider bugging out a viable option.

For more information about bugout bags, visit uspreppers.com

OpenID.net: Crossing the Chasm In Mobile Identity: OpenID Foundation’s Mobile Profile Working Group [Technorati links]

October 16, 2014 03:45 PM

Mobile Network Operators (MNOs) worldwide are in various stages of “crossing the chasm” in the Internet identity markets. As Geoffrey A. Moore noted in his seminal work, the most difficult step is making the transition between early adopters and pragmatists. The chasm crossing Moore refers to points to the bandwagon effect and the role standards play as market momentum builds.

MNOs are pragmatists. As they investigate becoming identity providers, open standards play a critical role in how they can best leverage their unique technical capabilities and interoperate with partners. The OpenID Foundation’s Mobile Profile Working Group aims to create a profile of OpenID Connect tailored to the specific needs of mobile networks and devices, thus enabling usage of operator ID services in an interoperable way.

The Working Group starts with the challenge that OpenID Connect relies on the e-mail address to determine a user’s OpenID provider (OP). In the context of mobile identity, the mobile phone number or other suitable mobile network data are considered more appropriate. The working group will propose extensions to the OpenID discovery function to use this data to determine the operator’s OP, while taking care to protect data privacy, especially the mobile phone number. We are fortunate that the working group is led by an expert in ‘crossing the chasm’ of email and phone number interoperability, Torsten Lodderstedt, Head of Development of Customer Platforms at Deutsche Telekom, who is also an OpenID Foundation Board member.
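For context, standard OpenID Connect discovery resolves an e-mail-style identifier to the user’s OP with a WebFinger lookup along the lines below (the host and account are placeholders); the profile discussed here would instead start from the phone number or other mobile network data:

# standard OpenID Connect discovery via WebFinger (placeholder host and account)
curl 'https://example.com/.well-known/webfinger?resource=acct:joe@example.com&rel=http://openid.net/specs/connect/1.0/issuer'
# expected response, roughly:
# {"subject":"acct:joe@example.com",
#  "links":[{"rel":"http://openid.net/specs/connect/1.0/issuer","href":"https://openid.example.com"}]}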

The Working Group’s scope is global, as geographic regions are typically served by multiple, independent mobile network operators including virtual network operators. The number of potential mobile OPs a particular relying party needs to set up a trust relationship with will likely be very high. The working group will propose an appropriate and efficient model for trust and client credential management based on existing OpenID Connect specifications. The Foundation is collaborating with the Open Identity Exchange to build a trust platform that combines the “rules and tools” necessary to ensure privacy, operational, and security requirements of all stakeholders.

Stakeholders, like service providers, will likely have different requirements regarding authentication transactions. The OpenID Connect profile will also define a set of authentication policies that operator OPs are recommended to implement and that service providers can choose from.

This working group has been set up in cooperation with OpenID Foundation member the GSMA, to coordinate with the GSMA’s Mobile Connect project. We are fortunate that David Pollington, Senior Director of Technology at GSMA, and his colleagues have been key contributors to the Working Group’s charter and will ensure close collaboration with GSMA members. There is an important coincidence of the GSMA and OIX joint leadership of mobile identity pilots with leading MNOs in the US and UK. All intermediary working group results will be proposed to this project and participating operators for adoption (e.g. in pilots) but can also be adopted by any other interested parties. The OIX and GSMA pilots in the US and UK can importantly inform the OIDF work group standards development process. That work on technical interoperability is complemented by work on “business interoperability.” OIX will publish a white paper tomorrow, “The ARPU of Identity”, that speaks to the business challenges MNOs face in leveraging their highly relevant and unique assets in Internet identity.

The OpenID Foundation Mobile Profile Working Group’s profile builds on the worldwide adoption of OpenID Connect. The GSMA and OIX pilots offer an international test bed for both business and technical interoperability based on open standards. Taken together with the ongoing OIX White Papers and Workshops on the “Economics of Identity”, “chasm crossing” is within sight of the most pragmatic stakeholders.

Don

Ludovic Poitou - ForgeRock: POODLE SSL Bug and OpenDJ [Technorati links]

October 16, 2014 01:40 PM

A new security issue hit the streets this week: the POODLE SSL bug. We immediately received a question on the OpenDJ mailing list on how to remediate the vulnerability.
While the vulnerability is mostly triggered by the client, it’s also possible to prevent the attack by disabling the use of SSLv3 altogether on the server side. Beware that disabling SSLv3 might break old legacy client applications.

OpenDJ uses the SSL implementation provided by Java, and by default will allow use of all the TLS protocols supported by the JVM. You can restrict the set of protocols for the Java VM installed on the system using deployment.properties (on the Mac, using the Java Preferences Panel, in the Advanced Mode), or using environment properties at startup (-Ddeployment.security.SSLv3=false). I will let you search through the official Java documentation for the details.

But you can also control the protocols used by OpenDJ itself. If you want to do so, you will need to change settings in several places:

For example, to change the settings in the LDAPS Connection Handler, you would run the following command:

# dsconfig set-connection-handler-prop --handler-name "LDAPS Connection Handler" \
--add ssl-protocol:TLSv1 --add ssl-protocol:TLSv1.1 --add ssl-protocol:TLSv1.2 \
-h localhost -p 4444 -X -D "cn=Directory Manager" -w secret12 -n

Repeat for the LDAP Connection Handler and the HTTP Connection Handler.

For the crypto manager, use the following command:

# dsconfig set-crypto-manager-prop \
--add ssl-protocol:TLSv1 --add ssl-protocol:TLSv1.1 --add ssl-protocol:TLSv1.2 \
-h localhost -p 4444 -X -D "cn=Directory Manager" -w secret12 -n

And for the Administration Connector:

# dsconfig set-administration-connector-prop \
--add ssl-protocol:TLSv1 --add ssl-protocol:TLSv1.1 --add ssl-protocol:TLSv1.2 \
-h localhost -p 4444 -X -D "cn=Directory Manager" -w secret12 -n

All of these changes will take effect immediately, but they will only impact new connections established after the change.
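To double-check the result from a client machine, one quick test (assuming the LDAPS listener is on port 1636; adjust the host and port to your deployment) is to attempt an SSLv3-only handshake with OpenSSL, which should now be rejected, while TLS handshakes still succeed:

# openssl s_client -connect localhost:1636 -ssl3
(this handshake should now fail, while -tls1, -tls1_1 or -tls1_2 should still connect)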


October 15, 2014

Julian Bond: Hilarious bit of spam email today. [Technorati links]

October 15, 2014 07:52 PM
Hilarious bit of spam email today.

illuminatiworld781
Are you a business man or business woman, politician, musical, student and you want to be very rich,powerful and be famous in life. You can achieve your dreams by been a member of the Illuminati. With this all your dreams and heart desire can be fully accomplish, Illuminati cult online today and get instant sum of $25,000monthly for becoming a member and $100,000 for doing what you like to do . so if you have the interest, you can call, +447064249899 or +447053824724 

But I'm having trouble finding any 5s in 781, fnord.
[from: Google+ Posts]

Nat Sakimura: The University of Tokyo Falls from 5th to 28th! Harvard Takes the Top Spot. Recalculating Diamond’s “Universities That Produce Useful Graduates” Ranking [0] [Technorati links]

October 15, 2014 12:06 PM

While I was looking at Facebook on the train, the “ranking of universities that produced useful graduates” published in Weekly Diamond (10/18) came across my feed [1]. I have not got hold of the magazine itself yet, so I do not know the details of the survey, but according to the attached image, respondents were asked to name their top five “useful” and top five “not useful” universities, the answers were tallied with 5 points for 1st place, 4 points for 2nd, …, 1 point for 5th, and then

“useful (A)” − “not useful (B)”

was computed and apparently used as the ranking score. Here is the result.

(Source) Weekly Diamond, on sale 10/18, “Ranking of Universities That Produced Useful Graduates”

This ranking makes very little sense to me.

To begin with, I do not really understand what a number obtained by converting ordinal ranks into points and adding them up is supposed to mean, and I understand even less what it means to subtract those numbers from each other to build a ranking [2]. But lamenting that gets us nowhere, so as a somewhat more sensible ranking I computed

“useful ratio” = “useful (A)” / (“useful (A)” + “not useful (B)”)

and recalculated the ranking. The result is Table 1.
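To make the arithmetic concrete: under this metric Harvard University scores 24 / (24 + 0) = 100% and moves to the top, while the University of Tokyo scores 1596 / (1596 + 1161) ≈ 57.89% and drops from 5th in the Diamond ranking to 28th here.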

Table 1 – Revised ranking of universities producing useful graduates

Rank / Diamond rank / University / Useful (A) / Not useful (B) / Diamond score (A)-(B) / (A) ratio
1 25 ハーバード大学 24 0 24 100.00%
2 28 国際教養大学 16 0 16 100.00%
3 6 東京工業大学 496 79 417 86.26%
4 11 国際基督教大学 209 41 168 83.60%
5 4 一橋大学 580 115 465 83.45%
6 27 広島大学 22 5 17 81.48%
7 1 慶応義塾大学 2170 536 1634 80.19%
8 9 東北大学 250 62 188 80.13%
9 7 大阪大学 374 108 266 77.59%
10 3 京都大学 1041 328 713 76.04%
11 10 東京理科大学 256 81 175 75.96%
12 13 神戸大学 163 52 111 75.81%
13 24 津田塾大学 50 18 32 73.53%
14 2 早稲田大学 1838 684 1154 72.88%
15 22 電気通信大学 57 22 35 72.15%
16 30 京都工芸繊維大学 18 7 11 72.00%
17 14 北海道大学 144 59 85 70.94%
18 29 小樽商科大学 21 9 12 70.00%
19 21 東京外国語大学 65 28 37 69.89%
20 12 同志社大学 231 105 126 68.75%
21 16 名古屋大学 113 52 61 68.48%
22 17 関西学院大学 140 82 58 63.06%
23 20 九州大学 110 66 44 62.50%
24 8 明治大学 520 322 198 61.76%
25 25 千葉大学 68 44 24 60.71%
26 22 横浜国立大学 123 88 35 58.29%
27 15 上智大学 230 165 65 58.23%
28 5 東京大学 1596 1161 435 57.89%
29 19 中央大学 217 171 46 55.93%
30 18 筑波大学 103 82 21 55.68%
31 30 立教大学 139 128 11 52.06%
32 44 関西大学 108 138 -30 43.90%
33 60 青山学院大学 166 307 -141 35.10%
34 61 日本大学 239 457 -218 34.34%
35 43 南山大学 18 46 -28 28.13%
36 55 成蹊大学 41 114 -73 26.45%
37 62 法政大学 125 348 -223 26.43%
38 35 京都産業大学 11 31 -20 26.19%
39 50 國學院大學 23 67 -44 25.56%
40 56 近畿大学 38 119 -81 24.20%
41 58 獨協大学 27 116 -89 18.88%
42 34 成城大学 6 26 -20 18.75%
43 51 専修大学 12 62 -50 16.22%
44 49 お茶の水女子大学 10 53 -43 15.87%
45 41 埼玉大学 6 33 -27 15.38%
46 54 東海大学 14 78 -64 15.22%
47 48 東洋大学 9 51 -42 15.00%
48 37 群馬大学 4 27 -23 12.90%
49 59 学習院大学 22 151 -129 12.72%
50 57 明治学院大学 13 95 -82 12.04%
51 53 駒沢大学 8 64 -56 11.11%
52 39 鳥取大学 2 26 -24 7.14%
53 52 帝京大学 4 55 -51 6.78%
54 40 高崎経済大学 2 28 -26 6.67%
55 33 宇都宮大学 0 20 -20 0.00%
56 32 関東学院大学 0 20 -20 0.00%
57 36 明星大学 0 21 -21 0.00%
58 38 茨城大学 0 24 -24 0.00%
59 42 亜細亜大学 0 28 -28 0.00%
60 45 文教大学 0 37 -37 0.00%
61 47 国士舘大学 0 39 -39 0.00%
62 46 名城大学 0 39 -39 0.00%

Harvard University, which languished at 25th in the Diamond ranking, duly jumps to the top.

Well, in any case these numbers still do not mean very much either… For your reference.

I suppose I should go and buy Weekly Diamond…

[0] After publishing this post, I noticed a BLOGOS article titled “The University of Tokyo falls from 5th to 29th! Recalculating the ‘universities that produce useful graduates’ ranking” (http://blogos.com/article/96472/). In my calculation the University of Tokyo came out 28th, so I deliberately revised my title into a copycat version. The original title was “Diamond’s ‘ranking of universities producing useful graduates’ ← no good”.

[1] “[Breaking] The ‘worst 1’ of the ranking of universities useful at work announced”, http://netgeek.biz/archives/23399

[2] Since “useful / not useful” is presumably a relative evaluation among the people at a given workplace, comparing the answers on a single scale also makes little sense to me, but let us set that aside.

Kuppinger Cole: 09.12.2014: Access Governance for today’s agile, connected businesses [Technorati links]

October 15, 2014 09:29 AM
In KuppingerCole

In today’s fast changing world the digitalization of businesses is essential to keep pace. The new ABC – Agile Businesses Connected – is the new paradigm organizations must follow. They must connect to their customers, partners and associates. They must become agile to respond to the changing needs of the market. They must understand, manage, and mitigate the risks in this connected world. One important aspect of this is the governance of the ever-increasing number of identities – customers,...
more
October 14, 2014

Radovan Semančík - nLight: The Old IDM Kings Are Dead. Long Live the New Kings. [Technorati links]

October 14, 2014 06:04 PM

It can be said that Identity Management (IDM) was born in the early 2000s. That was the time when many people realized that a single big directory just won't do it. They realized that something different was needed to bring order into the identity chaos. That was the dawn of the user provisioning system. The early market was dominated by only a handful of small players: Access360, Business Layers, Waveset and Thor. Their products were the children of the dot-com age: enterprise software built on state-of-the-art platforms such as J2EE. These products were quite terrible by today's standards. But they somehow did the job that no other software was able to do. Therefore it is obvious that these companies got acquired very quickly. Access360 is now an IBM Tivoli product. Business Layers was acquired by Netegrity which was later acquired by CA. Waveset was taken by Sun. And Thor ended up in Oracle. By 2005 the market was "consolidated" again.

The development of all the early products went on. A lot of new features were introduced. Also some new players entered the market. Even Microsoft hastily hopped on this bandwagon. And the market became quite crowded. What started as a provisioning technology later became "compliance" and "governance" to distinguish individual products. And even more features were added. But the basic architecture of the vast majority of these products remained the same during all these years. One just cannot easily evolve the architecture and clean up the product while there is an enormous pressure to deliver new features. Therefore the architecture of these products still remains essentially in the state in which it was originally designed in the early 2000s. And it is almost impossible to change.

That was the first generation of IDM systems.

The 2000s were a very exciting time in software engineering. Nothing short of a revolution spread through the software world. The developers discovered The Network and started to use SOAP, which led to the SOA craze. And later the new age developers disliked SOAP and created the RESTful movement. XML reached its zenith and JSON became popular. The idea of object-relational mapping spread far and wide. The term NoSQL was coined. The heavyweight enterprise-oriented architectures of the early 2000s were mostly abandoned and replaced by the lightweight network-oriented architectures of the late 2000s. And everything was suddenly moving up into the clouds.

It is obvious that the old-fashioned products that had built up a decade of technological debt cannot keep up with all of this. The products started to get weaker in the late 2000s. Yet only very few people noticed that. The first-generation products had gained an enormous business momentum and that simply does not go away from one day to the next. Anyway, in 2010 there were perhaps only a couple of practical IDM products left. The rest were too bloated, too expensive and too cumbersome to be really useful. Their owners hesitated for too long to re-engineer and refresh the products. But it is too late to do that now. These products need to be replaced. And they will be replaced. Soon.

This situation is quite clear now. But it was not that clear just a few years ago. Yet several teams began new projects in 2010 almost at the same time. Maybe that was triggered by the Oracle-Sun acquisition or maybe the time was just right to change something ... we will probably never know for sure. The projects started almost on a green field and they had an enormous effort ahead of them. But the teams went on and after several years of development there is a whole new breed of IDM products. Lean, flexible, scalable and open.

This is the second generation of IDM systems.

The second-generation systems are built on the network principles. They all have lightweight and flexible architectures. And most of them are professional open source! There is ForgeRock OpenIDM with its lightweight approach and extreme flexibility. Practical Evolveum midPoint with a very rich set of features. And Apache Syncope with its vibrant and open community. These are just three notable examples of the new generation. A generation of IDM systems that has arrived right on time.

(Reposted from https://www.evolveum.com/old-idm-kings-dead-long-live-new-kings/)

Courion: Courion Named a Leader in Access Governance by KuppingerCole [Technorati links]

October 14, 2014 01:38 PM


Today Courion was named a leader in the 2014 Leadership Compass for Access Governance by KuppingerCole, a global analyst firm. Courion’s Access Assurance Suite was recognized for product features and innovation, and as a very strong offering that covers virtually all standard requirements. In the management summary of the report, Courion is highlighted as the first to deliver advanced access intelligence capabilities.

Courion was also recognized as a leader in the Gartner Magic Quadrant for Identity Governance and Administration (IGA) and as a leader in the KuppingerCole Leadership Compass for Identity Provisioning earlier this year.


Julian Bond: Here we are on the surface of this spaceship, travelling through time and space at 1 second per second... [Technorati links]

October 14, 2014 12:45 PM
Here we are on the surface of this spaceship, travelling through time and space at 1 second per second (roughly) and about 360 km/sec towards Leo.

So what I want to know is, who's in charge? Because I have to say there are some aspects of the cruise that I'm not entirely happy with.
[from: Google+ Posts]

Mike Jones - Microsoft: JOSE -34 and JWT -28 drafts addressing IESG review comments [Technorati links]

October 14, 2014 12:37 PM

Updated JOSE and JWT specifications have been published that address the IESG review comments received. The one set of normative changes was to change the implementation requirements for RSAES-PKCS1-V1_5 from Required to Recommended- and for RSA-OAEP from Optional to Recommended+. Thanks to Richard Barnes, Alissa Cooper, Stephen Farrell, Brian Haberman, Ted Lemon, Barry Leiba, and Pete Resnick for their IESG review comments, plus thanks to Scott Brim and Russ Housley for additional Gen-ART review comments, and thanks to the working group members who helped respond to them. Many valuable clarifications resulted from your thorough reviews.

The specifications are available at:

HTML formatted versions are available at:

Julian Bond: One for the bucket list. Get yourself to a train station in Morocco in early June 2015. Get picked up... [Technorati links]

October 14, 2014 11:55 AM
One for the bucket list. Get yourself to a train station in Morocco in early June 2015. Get picked up and taken to a small village in the Rif mountains where you'll be looked after and fed by families in their homes for 3 days. The Master Musicians Of Joujouka then play their special blend of Sufi music each day and late into each night. Places limited to 50 and it's €360 all in.

http://thequietus.com/articles/16457-master-musicians-of-joujouka-festival-2015

There's a slightly cheaper one day version (€100) on 15-Nov-2014 tied in with the Beat Conference (17/19 Nov) in Tangier to celebrate the 100th anniversary of William Burroughs' birth. The conference fee is basically B&B in the hotel plus food.
 The Quietus | News | Master Musicians Of Joujouka Festival 2015 »
Plus, one-off date in the village this November as part of William S. Burroughs centenary celebrations

[from: Google+ Posts]

Kuppinger Cole: SAP enters the Cloud IAM market – the competition becomes even tougher [Technorati links]

October 14, 2014 07:28 AM
In Martin Kuppinger

The market for Cloud IAM and in particular Cloud User and Access Management – extending the reach of IAM to business partners, consumers, and Cloud applications through a Cloud service – is growing, both with respect to market size and service providers. While there were a number of start-ups (such as Ping Identity, Okta and OneLogin) creating the market, we now see more and more established players entering the field. Vendors such as Microsoft, Salesforce.com or Centrify are already in. Now SAP, one of the heavyweights in the IT market, has recently launched their SAP Cloud Identity Service.

The focus of this new service is managing access for all types of users, their authentication, and Single Sign-On, to on-premise applications, SAP Cloud applications, and 3rd party Cloud services. This includes capabilities such as SSO, user provisioning, self-registration and user invitation, and more. There is also optional support for social logins.

Technically, there is a private instance per tenant running on the SAP Cloud Identity Service, which acts as Identity Provider (IdP) for Cloud services and other SAML-ready SaaS applications, but also as an interface for external user authentication and registration. This connects back to the on-premise infrastructure for accessing SAP systems and other environments, providing also SSO for users already logged in to SAP systems.

With this new offering, SAP is becoming an interesting option in that field. While they do not sparkle with a large number of pre-configured Cloud services – some other players claim to have more than 3,000 Cloud services ready for on-boarding – SAP provides a solid conceptual approach to Cloud IAM, which is strongly tied into the SAP HANA platform, the SAP HANA Cloud, and the on-premise SAP infrastructures.

This tight integration into SAP environments, together with the fact that SAP provides its own, certified data center infrastructure, plus the fact that it is from SAP (and SAP buyers tend to buy from SAP) makes it a strong contender in the emerging Cloud User and Access Management market.

October 13, 2014

Julian Bond: China's per capita CO2 production (7.2 tonnes pa) is now higher than the EU's (6.8). It's still half... [Technorati links]

October 13, 2014 07:00 PM
China's per capita CO2 production (7.2 tonnes pa) is now higher than the EU's (6.8). It's still half the USA's (16.5) per capita figure but it's increased 4 fold since 2001.

The totals are more like China 29%, USA 15%, EU 10% of total pa CO2 production.

So now what happens?

http://rationthefuture.blogspot.co.uk/2014/10/world-carbon-emissions-out-of-control.html
http://www.bbc.co.uk/news/science-environment-29239194

Obviously, it's all OK, because Paul Krugman (Nobel prize winning economist and NYT columnist) says "Saving the planet would be cheap; it might even be free." with a bit of carbon tax, carbon credits and other strong measures to limit carbon emissions.
http://www.nytimes.com/2014/09/19/opinion/paul-krugman-could-fighting-global-warming-be-cheap-and-free.html
And anyone who disagrees is just indulging in "climate despair".
 World carbon emissions out of control »
I am writing another post that has been triggered by a news article, only this time it is about climate change. The headline 'China's per capita carbon emissions overtake EU's' came as a bit of a shock. 'While the per capita ...

[from: Google+ Posts]

Kuppinger Cole11.12.2014: Understand your access risks – gain insight now [Technorati links]

October 13, 2014 10:35 AM
In KuppingerCole

Access Intelligence: Enabling insight at any time – not one year later, when recertifying again. Imagine having less work and better risk mitigation in your Access Governance program. What sounds hard to achieve can become reality by complementing traditional approaches of Access Governance with Access Intelligence: analytics that help identify the biggest risks simply, quickly, and at any time. Knowing the risks helps in mitigating them, by running ad hoc recertification only for these...
more

Kuppinger ColeAdvisory Note: Security Organization, Governance, and the Cloud - 71151 [Technorati links]

October 13, 2014 10:14 AM
In KuppingerCole

The cloud provides an alternative way of obtaining IT services that offers many benefits including increased flexibility as well as reduced cost.  This document provides an overview of the approach that enables an organization to securely and reliably use cloud services to achieve business objectives.


more

Kuppinger ColeAdvisory Note: Maturity Level Matrixes for Identity and Access Management/Governance - 70738 [Technorati links]

October 13, 2014 09:31 AM
In KuppingerCole

KuppingerCole Maturity Level Matrixes for the major market segments within IAM (Identity and Access Management) and IAG (Identity and Access Governance). Foundation for rating the current state of your IAM/IAG projects and programs.


more
October 12, 2014

Anil JohnWhat Is the Role of Transaction Risk in Identity Assurance? [Technorati links]

October 12, 2014 01:30 PM

Identity assurance is a measure of the needed level of confidence in an identity to mitigate the consequences of authentication errors and misuse of credentials. As consequences become more serious, so does the required level of assurance. Given that identity assurance typically has not taken transaction-specific data as input, how should digital services factor transaction risk into their identity assurance models?

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

October 09, 2014

ForgeRockTaking heart about the future of identity [Technorati links]

October 09, 2014 06:09 PM

By Eve Maler, ForgeRock VP of Innovation and Emerging Technology

A lot of us doing digital identity know that innovation in this space comes in fits and starts. There have been times when the twice-yearly Internet Identity Workshops felt like exercises in marking time: Okay, if InfoCard isn’t quite it for consented attribute sharing, what’s the answer? And what do you mean everyone doesn’t yet have a server under their desks at home running an OpenID 2.0 identity provider?

At other times, there’s excitement in the air because we feel like we’re on to something big, and if we just align our polarities in the right way, we can really get somewhere. It’s out of these moments that OpenID Connect (a merger of ideas from Facebook and OpenID Artifact Binding folks) — for one example — was born.

I for one am feeling that excitement now. There’s a confluence of factors:

The “new Venn of access control” that I talked about at ForgeRock’s Identity Relationship Management Summit in June is coming — and all of us practitioners have a chance to make dramatic progress if we can just…align.

There are a couple of key opportunities to do that in the short term: IIW XIX in Mountain View in three weeks, and the IRM Summit in Dublin — along with a Kantara workshop on the trusted identity exchange and the “age of IRM agility” — in four.

If you join me at these venues, you can catch up on important User-Managed Access (UMA) progress, and also hear about — and maybe get involved in — an exciting new group that Debbie Bucci of HHS ONC and I are working to spin up at the OpenID Foundation: the HEAlth Relationship Trust Work Group. The HEART WG is all about profiling the “Venn” technologies of OAuth, OpenID Connect, and UMA to ensure that patient-centric health data sharing is secure, consented, and interoperable the world over. (If you’re US-based like me and have visited a doctor lately, you’ve probably been onboarding to a lot of electronic health record systems — how would you like to help ensure that these systems are full participants in the 21st century? Amirite?)

See you there!

- Eve (@xmlgrrl)

 

The post Taking heart about the future of identity appeared first on ForgeRock.

Kuppinger ColeLeadership Compass: Access Governance - 70948 [Technorati links]

October 09, 2014 12:20 PM
In KuppingerCole

Leaders in innovation, product features, and market reach for Identity and Access Governance and Access Intelligence. Your compass for finding the right path in the market.


more

Julian BondAssuming it's not actually raining, we're planning on going to the Cody Dock, Cody Wilds launch party... [Technorati links]

October 09, 2014 08:13 AM
Assuming it's not actually raining, we're planning on going to the Cody Dock, Cody Wilds launch party on Sunday. https://www.facebook.com/events/805976122778470/
with a short detour to Trinity Buoy Wharf to see the Long Player http://longplayer.org/visit/.

For all the peeps in East London, if you've got nothing better to do this Sunday afternoon, you might like it. Especially if you bicycle, you need to get yourself to Twelvetrees Crescent and then walk/cycle down the East side of the Lea and Bow Creek. Bromley-By-Bow is close, or come at it from the other side via Star Lane DLR. https://www.google.co.uk/maps/@51.5237749,-0.0079912,18z/data=!5m1!1e3

Of course, if it's raining all day, then maybe not.

More detail here.
http://diamondgeezer.blogspot.co.uk/2014/10/cody-wilds.html
 CODY WILDS LAUNCH PARTY | Facebook »
CELEBRATING THE LAUNCH OF CODY WILDS CAMPAIGN! Help us win funding to build a wildflower haven on the River Lea! Come down to Cody Dock for an amazing day of workshops, music, food, drink, merriment, dancing and voting! Sip on mulled cider made with apples from Danny the Woodsman's orchard, ...

[from: Google+ Posts]

Vittorio Bertocci - MicrosoftThe use of Azure AD Behind “Deploy to Azure” [Technorati links]

October 09, 2014 07:01 AM

About one week ago I got a mail from my good friend Brady, who was looking for some clarifications about our Azure AD multitenant web app sample. That piqued my curiosity: Brady doesn’t stay still for one sec and we chat about identity all the time, but multitenancy?

As it turns out, he was cooking a really awesome project: a way of fast-tracking deployments of GitHub repos to new or existing Azure web sites. As of this morning, he released the project’s bits and blogged about it.
You should really head to Brady’s blog and check out the details of the project. In this post I’ll add some color to its identity backstory.

Deploy to Azure and Azure AD’s Multitenancy Features

The idea behind Deploy to Azure is beautifully simple.
Say that you have the source of a great web app in a GitHub repo, and that you want to make it extra easy for anybody to deploy your web app to their own Azure web site.
Deploy to Azure provides you with a simple button which will do just that; all you need to do is add it to your repo. The button is in fact just an affordance for following a deep link, which points to a web application.
The web app will start by asking you to authenticate with your Azure AD credentials – the ones corresponding to your Azure subscription (more about this later). If you happen to be already authenticated, you’ll skip this step and find yourself single-signed-on to the app right away.
The app will ask you some basic questions about your deployment, such as the name of the Azure web site you want to target. Once that is done, clicking the “Deploy” button triggers the web site creation (if necessary) and the subsequent deployment. Straight from a public GitHub repository to an Azure web site of your choosing!

Pretty neat, right? Once again, I encourage you to visit Brady’s blog to learn about all the MAML (Microsoft Azure Management Libraries) magic that powers this sample.

That said, let’s see what makes Deploy to Azure tick from the identity perspective.

Signing In

The web app that makes all this possible is a fork of our Azure AD multitenant web app sample.
Deploy to Azure is significantly simpler than the original sample, given that it only manipulates 1st party resources – that is, the Microsoft Azure management API. The original sample shows you how to protect your own resources, hence it contains code that is necessary for representing those resources and restricting access only to known users. In Deploy to Azure the authorization logic is applied automatically – Azure already knows which users can deploy to which web sites, hence we dispensed with all the onboarding logic and associated custom validations.

The sign-in flow is enabled by the brand new ASP.NET OWIN OpenId Connect (OIDC) middleware. If you navigate to App_start/startup.auth.cs, you’ll locate right away the crucial bits carrying the sign-on configuration:

app.UseOpenIdConnectAuthentication(
    new OpenIdConnectAuthenticationOptions
    {
        ClientId = clientId,
        Authority = Authority,
        TokenValidationParameters = new System.IdentityModel.Tokens.TokenValidationParameters
        {
            // instead of using the default validation (validating against a single issuer value,
            // as we do in line of business apps), we inject our own multitenant validation logic
            ValidateIssuer = false,
        },
        // ...other stuff
    });

 

Here there’s the interesting bit about multitenancy.
In the simplest case, apps configured to use Azure AD are set up to work with ONE specific directory tenant. That’s the typical line of business app scenario: a contoso.com developer creates one app and wants to make it available to all his/her contoso.com colleagues. In that case, the value of Authority would be “https://login.windows.net/contoso.com” and the OIDC middleware would make sure that only users presenting tokens issued by contoso.com’s Azure AD tenant would be accepted.

That would be too restrictive for Deploy to Azure: here we want to be able to accept anybody who has a valid Azure AD account, no matter from which directory he/she comes from. We relax the default issuer validation policy via that TokenValidationParameter init you saw in the snippet above: not checking for any particular issuer means that we will be accepting tokens from any Azure AD tenant.
That solves the validation part, but we still need to do something about the Authority.
Azure AD exposes a special endpoint, referred to as “common”, of the form “https://login.windows.net/common”. “Common” makes it possible to gather user credentials without having to decide in advance from which Azure AD tenant he/she comes from. All you need to do is using the common endpoint as Authority, and the app is configured to accept tokens from any Azure AD user without the need for you to understand anything of the following explanations. I am adding those in case you really want to understand what’s going on in there.

I described the common endpoint in detail in this other post. Here I’ll offer a super quick explanation of how the mechanism works, in part because that’s useful knowledge if you often write multitenant apps against AAD and in part because it will make it easier later in the post to explain why Deploy to Azure currently suffers from an important limitation.

Here there’s a rough diagram of how the common endpoint sign in works. I am assuming that the user is not already signed in – you can easily derive the behavior in that case by considering that steps 1 to 3 would take place automatically, without showing any UX to the user.

[Diagram: web sign-in through the common endpoint]

1 – The user clicks on the Deploy to Azure button. That triggers a link that leads to a protected action on the Deploy to Azure web app, which in turn generates a web sign-in request via the common endpoint.

2 – Azure AD serves back a generic credential gathering experience.

3 – As the user types the username, Azure AD infers the actual Azure AD tenant from which the user is from. Depending on the nature of the tenant, this might affect the experience: for example, if contoso is what we call a federated tenant (that is to say, its directory is a cloud projection of an on-premises one) then the auth flow would be directed to the local contoso ADFS instance.

4 – With the tenant correctly identified, the user is authenticated and the requested token is issued by the target tenant. That token is what the OWIN middleware was waiting for to declare the caller signed in.
Note that the first time this flow takes place, the user will be informed about what resources and permissions Deploy to Azure is requesting: actual issuance of the token will depend on whether the user grants or denies consent. I didn’t depict this part for simplicity.

That’s pretty neat! As mentioned earlier, if your app were to façade your own resources you might want to add extra checks on from which tenant the user is coming from – see the original sample for that – but for Deploy to Azure that’s all we need to do for web sign on.

Obtaining a Token for Accessing the Azure Management API

The web sign on is only the beginning. Deploy to Azure needs to be able to invoke the Azure management API. In order to do so, it needs to acquire an access token with the right permissions.

OpenId Connect offers a great hybrid flow that not only signs the user into a web application, but at the same time obtains an authorization code that can be redeemed for access and refresh tokens for a resource of your choice.

You can see the code redemption logic in action in the AuthorizationCodeReceived notification, once again in startup.auth.cs. ADAL makes it really easy, hiding all of the underlying protocol complexity and saving the tokens in a distributed cache that can later be accessed from anywhere in the web application. The initial access and refresh tokens obtained are for the Graph API, while Deploy to Azure needs tokens for the Azure management API: however, thanks to the fact that all refresh tokens issued by Azure AD today are multi-resource refresh tokens, having those initial tokens in the cache allows Deploy to Azure to silently (i.e. without user prompts) obtain all the other tokens it needs. You can observe that principle in action in the DeployController, where the Index action taps into the common token cache to get the tokens it needs for performing its Azure management magic. For details about that, see Brady’s post.
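
To give a feel for what that notification roughly looks like, here is a simplified sketch of the code-redemption step. Treat it as illustrative rather than the sample’s actual code: clientId, appKey and tokenCache are placeholder variables, and the real sample wires in its own cache and error handling.

Notifications = new OpenIdConnectAuthenticationNotifications
{
    AuthorizationCodeReceived = async context =>
    {
        // Placeholder credentials: the app's client id and secret from the Azure portal.
        var credential = new ClientCredential(clientId, appKey);

        // The tenant the signed-in user actually belongs to, read from the token's tenantid claim.
        string tenantId = context.AuthenticationTicket.Identity
            .FindFirst("http://schemas.microsoft.com/identity/claims/tenantid").Value;

        // Redeem the authorization code at the user's own tenant. The refresh token that
        // comes back is a multi-resource refresh token, so tokens for the Azure management
        // API can later be requested silently from the same cache.
        var authContext = new AuthenticationContext(
            "https://login.windows.net/" + tenantId, tokenCache);
        await authContext.AcquireTokenByAuthorizationCodeAsync(
            context.Code,
            new Uri(context.Request.Uri.GetLeftPart(UriPartial.Path)),
            credential,
            "https://graph.windows.net");
    }
}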

Known Limitations & Workarounds

Deploy to Azure is a super early preview, as Brady mentions. Here I’d like to dig a bit into a shortcoming that you are pretty likely to notice: in its current form, Deploy to Azure will not work with an MSA (formerly known as LiveID) account, even if that MSA is a global administrator of your Azure AD tenant and subscription.

The root cause of this comes from the use of the common endpoint and the fact that today, knowing the MSA of a user is not enough to infer which Azure AD tenant should issue the requested token. Below you can find a version of the earlier sign-on diagram, modified to show what happens when you use an MSA.

[Diagram: sign-in attempt with an MSA through the common endpoint]

1 – The user clicks on the Deploy to Azure button. That triggers a link that leads to a protected action on the Deploy to Azure web app, which in turn generates a web sign-in request via the common endpoint.

2 – Azure AD serves back a generic credential gathering experience.

3 – As the user types the username, Azure AD infers that the user comes from MSA and redirects accordingly. Here the user successfully authenticates.

4 – The flow goes back to AAD, carrying a token proving that the user successfully authenticated with MSA. However, that does not help Azure AD decide which tenant should issue the token to Deploy to Azure! An MSA user is not tied to any specific AAD tenant, and in fact can be a guest in many (as depicted). Currently Azure AD does not know how to eliminate the ambiguity, and fails to issue a token.

The main workaround available today entails avoiding the use of common. That can be achieved in different ways:

A – If you want to offer the feature to your own organization, or any known organization, you can fork the code and use the specific organization’s identifier in lieu of common. Once that happens, you can safely use guest MSAs, as there is no doubt about which tenant should issue the token.

B – This is a bit more elaborate. Instead of automatically triggering authentication upon accessing Deploy to Azure, you could offer an experience that asks the user for the tenant (in the form of its domain) they want to use. You could then inject that back into the Authority, basically getting back to A. You might even create a tracking cookie to pick up that choice automatically the next time around.
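
As a rough illustration of option B (the variable names here are my own, not part of the sample), the tenant domain collected from the user simply replaces “common” when building the Authority:

// Hypothetical sketch of workaround B: build a tenant-specific authority from
// the domain the user typed in, instead of relying on the common endpoint.
string tenantDomain = "contoso.onmicrosoft.com";   // value collected from the user
string authority = string.Format("https://login.windows.net/{0}", tenantDomain);
// Use "authority" in place of https://login.windows.net/common when configuring
// the OpenId Connect middleware, as in the earlier snippet.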

I hope that the issue will be fixed, but in the meantime those workarounds should get you going if you want to kick the tires of the app!

Next Steps

Working with Brady is always a lot of fun! I am excited to see Deploy to Azure making use of Azure AD and ADAL features to enable such a compelling scenario, and I can’t wait to see how the project will evolve with your contributions.

October 08, 2014

Christopher Allen - AlacrityKathy Sierra of Serious Pony on Trolls [Technorati links]

October 08, 2014 06:47 PM

Paul MadsenSocial Media 2 Factor authentication [Technorati links]

October 08, 2014 04:33 PM

Premise

A user can authenticate to a web application (or a federation server) by sending an update (tweet, Facebook update, etc) with a randomly generated hashtag previously delivered to the user in the login interface. 

The fundamental requirement is that 

  1. the user be able to demonstrate ownership of the social account previously connected to their account at the authentication server by including a challenge string in a tweet, update etc
  2. the authentication server be able to determine that a particular challenge string was added to a tweet, update etc associated with a particular social account 

User Experience


Step 1:


User binds their social account to the authentication server


Alternatively, the ‘binding’ could consist solely of the user telling the authentication server their Twitter handle.

Step 2:


Later, User visits login page

User logs in with first factor, ie password, or SSO

Login UI displays randomly generated challenge string

Authentication server stores away challenge string against that user’s account

Alternatively, the challenge mechanism could be via Twitter, i.e. the authentication server sends the user a tweet, and the user’s response would be an RT.

Step 3:


User sends a tweet, including the challenge hashtag from Step 2


The response format & channel will depend on the nature of the challenge and how the user’s social media account was bound to the account at the authentication server.

Step 4:

After displaying the hashtag challenge to the user, the authentication server polls the user’s tweet stream (or equivalent) on some schedule for a tweet (or post) containing the challenge hashtag.

If such a tweet is found within some time period, the authentication page displays successful login.
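
A minimal sketch of what that Step 4 polling loop might look like follows; the searchRecentPosts delegate stands in for whatever read API the social network offers, and the names and timings are illustrative only, not part of the proposal.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Sketch of the Step 4 polling loop: keep checking the user's recent public
// posts for the challenge hashtag until it shows up or the challenge expires.
static class ChallengePoller
{
    public static async Task<bool> WaitForChallengeAsync(
        Func<string, Task<IEnumerable<string>>> searchRecentPosts,
        string handle, string challengeHashtag, TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            var posts = await searchRecentPosts(handle);
            if (posts.Any(p => p.Contains(challengeHashtag)))
                return true;                             // second factor satisfied
            await Task.Delay(TimeSpan.FromSeconds(10));  // poll on a schedule
        }
        return false;                                    // challenge expired: deny login
    }
}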

Discussion


  1. The default would be for the user to manually type the challenge string into their tweet. Might it be possible for the authentication server to instead/also display a QR code, for the user to scan and so launch their mobile Twitter client with the tweet ready to send?
  2. Instead of a string, the challenge could consist of a link to a specific picture or some other media
  3. If the user has previously authorized other applications to be able to send tweets on their behalf, then those other applications would potentially be able to send a response tweet, but only if they were able to know the challenge. Consequently, the authentication model is likely only relevant for a 2nd factor, as having the user first authenticated with the other factor would prevent other applications from knowing the challenge string.
  4. If the authentication server were able to determine how many applications the user has granted the ability to tweet on their behalf, then conceivably it could factor that into its assessment of assurance
  5. There could be a viral component to the marketing of the authentication service, as the user’s followers would see the authentication tweets
  6. Is there a risk of violating Twitter ToS?

Radovan Semančík - nLightFive Practical Ways to Ruin Your IAM Project [Technorati links]

October 08, 2014 02:42 PM

Identity and Access Management projects are very common nowadays. The interesting fact is that too many of them either vastly under-deliver or fail completely. I have been fighting in the IAM trenches for many long years and I have seen both successful and failed projects. It looks to me like IAM projects are surprisingly easy to ruin. I thought it would be good to summarize some of the worst mistakes.

1. Start Big and Go Down the Waterfall

Everybody knows the waterfall development model even though you may not know that it is called "waterfall". It goes like this: analysis -> design -> implementation -> testing -> deployment -> failure. This model has been known to software engineers since the 1970s. And even back then it was considered fundamentally flawed and was only documented as a bad example. The interesting fact is that it is widely used to this day. It is sometimes even recommended as a best practice. But that does not change anything about the fact that the process is flawed and that it is a recipe for almost certain failure.

Do not start your IAM project without any iterations in it. You may as well throw the money out of the window. Your IAM project needs to be decomposed into smaller iterations. Each of the iterations needs to have its own small analysis, design, implementation, test and deployment. An iteration should not be longer than 3-5 months. Each of the iterations should provide a tangible value. Each of them needs to pass its own ROI assessment. If it does not, then skip the iteration or stop the project. Plan the first couple of iterations before the project starts and check the ROI. If it fails, then do not start the project at all and re-consider the choice of technology and delivery partner.

Are you using a commercial IDM product that has a huge licensing cost and therefore cannot proceed in iterations without ruining ROI? This has a very simple solution: do not use such a product. The IAM market is crowded. If you look around a bit you can surely find a cheaper product. The capabilities of commercial IAM products are very similar, therefore there are many options to choose from. If you do not want to change the product, then ask for a discount. I have reports that many vendors are happy to discount the license fee by 50% or even more. Or even better: go for an open source product. Open source products have reached (and exceeded) the capabilities of many commercial IAM products during the last few years. The licensing cost of open source is zero - by definition. Therefore there is absolutely no reason for paying huge license fees. Every IAM project must be able to work in an iterative fashion and maintain ROI. If it does not, then do not go for it.

2. Spend the Budget on Product

Buy a product, deploy it and have fun. This simple plan may work for some technologies but IAM is definitely not one of them. IAM products typically do not work out of the box. They need to be configured, customized and extended; connectors, plug-ins, workflows and widgets need to be developed; and so on. The cost of the product is only a part of the total cost. And in fact it is quite a small part. Have a look at the cost structure of your project. If you plan to spend more on the product than you plan to spend on services, then stop immediately. Such a project is extremely likely to fail or run very far over budget. The ratio of product to service cost needs to be at most 50:50 - and this is still a very risky endeavor. A ratio of 20:80 in favor of services is perhaps the most realistic one.

Also forget about using expensive products. The differences between individual IAM products are quite small. Any reasonably good product can satisfy your needs. Best-of-breed IAM is a myth. A marketing strategy. What you need is not a product, you need a solution. Therefore do not buy the product and then look for a team. Look for an experienced team that can deliver a solution first. And let them recommend a product. A product without a good deployment team is a waste of energy and money. On the other hand, a good deployment team can satisfy your needs with almost any product.

3. Suite Up

All the big vendors offer technology suites: a set of products that are sold under the same brand. Such suites often create an impression that together they form one complete and integrated solution. However, this is almost never the case. Most suites are created by acquisitions. The acquired products are technologically very heterogeneous, almost to the point that they cannot really be integrated together. Therefore the "suite" is usually only a thick layer of paint designed to cover the differences between the products. It does not make product integration considerably easier. Therefore there is usually no significant technological difference between buying a suite and buying several independent products.

But there is one very significant business difference if you buy a suite: vendor lock-in. A vendor that offers a complete suite has very little incentive to support integration of its products with products from a competing vendor. Therefore suites are actually much harder to integrate with third-party products. If you are going to buy a suite, you are going to stay with that suite forever. This is a life sentence. And it is going to be very expensive.

4. Customize Ad Nauseam

Customization is a necessary part of any non-trivial IAM deployment. But it has to have its limits. Customizations are by definition pieces of functionality that are not reusable and therefore are not part of the product. Therefore too much customization naturally means a long and expensive deployment. But it is worse than that. Many IAM products are notoriously difficult to upgrade. Upgrades often ruin all the customizations. But even if an upgrade goes smoothly, part of the customization is almost certain to break and need to be modified. This creates a huge maintenance burden. In a lot of practical cases the operations team simply decides not to upgrade the product any more. However, this creates technical debt. The need to upgrade the product comes sooner or later. Then the only practical option is to re-implement the complete solution from the ground up. Which will completely ruin your initial investment.

Therefore keep the amount of customization reasonably low. The best projects that I have witnessed are those where both the technology and the business processes adapt to each other. However, this is a slow process and it just cannot be delivered in one huge package. It needs to be implemented using an iterative approach.

Also completely avoid products that require you to make customizations to support the very basic IAM features. Such products are a very expensive road to hell. Prefer products that have common functionality present out of the box but are still reasonably customizable. And choose a deployment team that can take advantage of such products.

5. Purchase an IAM Project

Many people believe that an IAM effort starts with analysis and ends with deployment. But that is definitely not the case. The real life of an IAM solution actually begins with deployment. The day when the deployment team hands the IAM solution to the operations team is actually the very first day of the IAM solution's lifetime. Similarly to security practices, IAM is not a project. It is a program. It may have a start but it does not have an end. It is a continuous process. It is not possible to reflect all the requirements before the system reaches operation. An IAM solution needs to be continually improved and adjusted. Therefore if your contract with the delivery team ends when the solution is deployed, then you are very likely doomed to fail in the long run.

Build up a long-term cooperation. Choose a delivery team that is able to work with you even after the system is deployed. Make sure that you can afford it. If you cannot, then there is absolutely no point in starting the IAM project in the first place. Make sure that the team is empowered to really help you. Conducting a Proof of Concept (PoC) before the project starts is a very good idea. Make sure that the PoC contains at least one very advanced scenario. Any fool can download and install the product. But going through an advanced scenario in a limited time can tell you whether the team really has the skills to use the product, whether they have good support from the vendor and also whether the vendor is open to supporting non-standard scenarios.

Bonus trick: If you are evaluating an IAM product, have a look at the logfiles. Let the engineers show you logfiles from an actual deployment (e.g. a PoC deployment or a demo). Choose a random part of the logfile and ask the engineers to interpret it for you. Let them explain what each line of the logfile means. If they cannot do it, then the team may not have enough skills to use the product. Or maybe the logfile is not intelligible at all and even the most skilled team cannot use it. Which means that the deployment team will be powerless if they encounter a product problem and will need to ask the vendor for assistance every time. Which means lock-in and very slow response times. Avoid this at any cost.

(Reposted from https://www.evolveum.com/five-practical-ways-ruin-iam-project/)

Paul MadsenA symmetrical NAPPS model [Technorati links]

October 08, 2014 02:16 PM

The NAPPS WG in the OIDF is defining a framework for enabling SSO to native applications.

One challenge has been in supporting 3rd party native applications from large SaaS that already have an OAuth & token infrastructure (Salesforce as an example).

For this sort of SaaS, NAPPS has to allow the SaaS's existing OAuth AS to issue the token ultimately used by the app on the API calls.

The NAPPS spec is evolving to deal with such applications in almost exactly the same way as it does native applications that call on-prem APIs built by the enterprise.

Fundamentally, for both categories of native applications, the enterprise AS issues to the Token Agent an identity token JWT, which is handed to the application through the mobile OS bindings. The app exchanges this JWT for the desired access token to be used on API calls - the only difference is the AS at which the JWT is exchanged.

Local native apps
  1. app requests tokens of TA, includes generated nonce
  2. TA uses its RT to send request + nonce to AS
  3. AS returns PoP JWT
  4. TA hands over PoP JWT to app
  5. App exchanges JWT, shows PoP
  6. AS returns token(s) to app
3rd party native apps 
  1. app requests tokens of TA, includes generated nonce
  2. TA uses its RT to send request + nonce to AS1
  3. AS1 returns PoP JWT, targeted at AS2
  4. TA hands over PoP JWT to app
  5. App exchanges PoP JWT against AS2, shows PoP
  6. AS2 returns token(s) to app
Step 5 in the 3rd party sequence implies a federated trust model - the SaaS AS2 must be able to trust & validate the JWT issued by the enterprise AS1.


The above model is attractive for the symmetry it provides between the two application categories.

Kuppinger Cole04.11.2014: One identity for all: Successfully converging digital and physical access [Technorati links]

October 08, 2014 11:13 AM
In KuppingerCole

Imagine you could use just one card to access your company building and to authenticate to your computer. Imagine you had only one process for all access, instead of having to queue at the gate waiting for new cards to be issued and having to call the helpdesk because the system access you requested still isn’t granted. A system that integrates digital and physical access can make your authentication stronger and provide you with new options, by reusing the same card for all access...
more

Kuppinger ColeExecutive View: Okta Cloud IAM Platform - 70887 [Technorati links]

October 08, 2014 09:45 AM
In KuppingerCole

Both Cloud computing and Identity and Access Management (IAM) can trace their beginnings to the late 1990s.

Cloud computing began as “web services”, then developed into Software as a Service (SaaS), later expanding to cover areas such as Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) and even, within the last couple of years, Identity as a Service (IDaaS) and...
more

October 07, 2014

Kuppinger ColeKuppingerCole Analysts' View on How Mature is your IAM Program [Technorati links]

October 07, 2014 09:59 PM
In KuppingerCole

Maturity Models have come of age. They themselves have matured considerably. Like many other methodologies, they have followed a certain hype-cycle characteristic and have by now arrived at a plateau of general acceptance. Nevertheless, rating the maturity of IAM/IAG programs is not easy, so some questions arise when a maturity model is to be employed. These questions are:

What is a maturity model? What kinds of MMs are in use? Which MMs are adapted to IAM / IAG? Why...
more

KatasoftUse JWT The Right Way! [Technorati links]

October 07, 2014 03:00 PM

JSON Web Token (JWT) is a useful standard becoming more prevalent, because it sends information that can be verified and trusted with a digital signature. In their most basic form, JWTs allow you to sign information (referred to as claims) with a signature and can be verified at a later time with a secret signing key. The spec is also designed with more advanced features that help against man-in-the-middle and replay attacks.

Why Are JWTs Important?

They handle some of the problems with information passed from a client to a server. JWT allows the server to verify the information contained in the JWT without necessarily storing state on the server. As a trend, we are seeing more and more SaaS products include JWT integrations as a feature or use JWT directly in their product. Stormpath has always followed secure best practices for JWTs, in several parts of our stack, so we want to share some best practices for using JWT the right way.

What Is A JWT?

Before we get started, let’s quickly look at what a JWT contains so we can clearly understand why these best practices are important. In its simplest form, a JWT has three distinct parts that are URL-encoded for transport: a header, a set of claims, and a signature.

The header and claims are JSON that is base64-encoded for transport. The header, claims, and signature are appended together, separated by a period character (.).
For example, if the header and claims are:

//header
{
    "alg": "HS256", //denotes the algorithm (shorthand alg) used for the  signature is HMAC SHA-256
    "typ": "JWT" //denotes the type (shorthand typ) of token this is
}

//claims
{
    "sub": "tom@stormpath.com",
    "name": "Tom Abbott",
    "role": "user"
}

The JWT would be represented by this pseudocode:

var headers = base64URLencode(myHeaders);
var claims = base64URLencode(myClaims);
var payload = headers + "." + claims;

var signature = base64URLencode(HMACSHA256(payload, secret));

var encodedJWT = payload + "." + signature;

That is JWT in a nutshell. The most important thing about JSON Web Tokens is that they are signed. This ensures the claims have not been tampered with when stored and passed between your service and another service. This is called verifying the signature. JWT has more advanced features for encryption, so if you need the information in the claims to be encrypted, this is possible using JSON Web Encryption.
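
For readers who want to see the signing and verification steps end to end, here is a small self-contained sketch (written in C#, purely illustrative and not Stormpath’s implementation; use a vetted JWT library in production) that mirrors the pseudocode above:

using System;
using System.Security.Cryptography;
using System.Text;

// Illustrative HS256 sign/verify, mirroring the pseudocode above.
// Not production code: real applications should use a vetted JWT library
// and a constant-time signature comparison.
static class JwtSketch
{
    static string Base64UrlEncode(byte[] bytes)
    {
        return Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');
    }

    public static string Sign(string headerJson, string claimsJson, byte[] secret)
    {
        string payload = Base64UrlEncode(Encoding.UTF8.GetBytes(headerJson))
                         + "." + Base64UrlEncode(Encoding.UTF8.GetBytes(claimsJson));
        using (var hmac = new HMACSHA256(secret))
        {
            string signature = Base64UrlEncode(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
            return payload + "." + signature;
        }
    }

    // Verification recomputes the signature over the received header.claims
    // and compares it with the signature that came with the token.
    public static bool Verify(string jwt, byte[] secret)
    {
        string[] parts = jwt.Split('.');
        if (parts.Length != 3) return false;
        string payload = parts[0] + "." + parts[1];
        using (var hmac = new HMACSHA256(secret))
        {
            string expected = Base64UrlEncode(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
            return expected == parts[2];
        }
    }
}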

How to Secure JWT

There are a lot of libraries out there that will help you create and verify JWTs, but when using JWTs there are still some things that you can do to limit your security risk.

If you are using JWTs, I hope this information helps. If you have any questions, comments, or suggestions feel free to reach out to me by email or twitter.

Paul MadsenAs long as X is true ..... [Technorati links]

October 07, 2014 01:08 PM

When my Samsung Gear watch is within BLE range of my Samsung S5, I need not enter my screen unlock pattern in order to get into the phone. The S5 interprets the proximity of the Gear as a proxy for my own proximity, and so deduces that it is myself handling the phone and not somebody else. 

This is an example of what appears to be an emerging model for authentication, which I’ll give the pretentious name of ‘conditional session persistence’ and characterize as

‘As long as X is true, no need to Y’

where ‘X’ is some condition - the continued state of which protects the user from having to perform ‘Y’, generally some sort of explicit login operation.

For my Gear & S5 use case, the X condition is ‘the Gear is within BLE range of the S5’ and the ‘Y’ is ‘demonstrate knowledge of secret unlock pattern to access phone’.
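
If it helps to see the shape of the model in code, here is a toy sketch (all names are my own invention, not any vendor’s API) of a session that stays valid only while some continuously monitored condition X holds:

using System;

// Toy sketch of "as long as X is true, no need to Y". The conditionX delegate
// stands in for whatever sensor check applies (BLE proximity, skin contact,
// heart rate, ...). Once the condition breaks, the explicit login step Y is
// required again.
class ConditionalSession
{
    private readonly Func<bool> conditionX;
    private bool authenticated;

    public ConditionalSession(Func<bool> conditionX)
    {
        this.conditionX = conditionX;
    }

    // Y: the explicit authentication step (unlock pattern, PIN, ECG match, ...).
    public void OnExplicitLogin()
    {
        authenticated = true;
    }

    public bool IsAccessAllowed()
    {
        if (!conditionX())           // X no longer holds: wearable removed, out of range, ...
        {
            authenticated = false;   // force Y on the next attempt
        }
        return authenticated;
    }
}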

This authentication model is appearing elsewhere.

The Nymi wristband records the user’s ECG and sends it to a companion app on a paired device for it to be compared to the previously recorded ECG pattern. If the biometric comparison is successful, then the companion application responds back to the Nymi that it should unlock a previously registered crypto key and use that key to authenticate to resources and services. To ‘authenticate’ to the Nymi the user must touch a finger of the other hand to the top of the wristband - this creates an electrical loop that allows the ECG to be recorded. Once recorded and successfully compared, the ECG is not measured again, at least not until the wristband is removed from the user’s wrist. As long as the wristband stays on the user’s wrist the Nymi remains willing to assert the user’s identity by presenting the key (or presumably separate keys for different resources). Once removed from the wrist, then the user is required to re-authenticate once more via their ECG.

The Apple Watch is reported to use the same model.

On the back of the case, a ceramic cover with sapphire lenses protects a specially designed sensor that uses infrared and visible-light LEDs and photodiodes to detect your heart rate.

Via the 4 sensors on the back, the Watch will be able to determine when it is removed from the wrist after an initial authentication (by PIN it seems, but it’s not inconceivable that it could use the heart rate as a biometric?). As long as the Watch stays on the user’s wrist, the original authentication remains valid and the Watch can be used to, for instance, buy over-priced coffees from hipster baristas.

What is novel in this new model is of course the ‘As long as X is true’ clause - some sort of continuous check of the user’s context that serves to better bind them to the original authentication.  

Contrast this new model with traditional web-based authentication, in which, after the user presents some password (inevitably derived from their favourite sports team’s name), the authentication server sets some session cookie:

‘As long as T seconds haven’t expired, no need to Y’

In this model, nothing binds the user to the authenticated browser session and so prevents somebody else from hijacking that session (which of course is why those who (perversely) log in to their banks and other sensitive resources from public kiosks are reminded to sign out when done).

Even in this new model, there will be differences in the certainty with which the persistence of X can be determined - the Nymi and Apple Watch, because they more tightly bind the user to the authenticating device, would likely offer more assurance than the Samsung Gear (I can take the Gear off my wrist and the S5 will be oblivious).

Of course, the ‘As long as X’ condition is only viable if there are local sensors able to monitor the state of X - whether Bluetooth proximity, or skin contact, or heart rate measurement, or future buttock-to-sofa contact etc. 

But fortunately the things that we are more and more surrounding ourselves with, even if primarily intended for some other purpose (think light bulbs, thermostats, and garage doors), will provide those sensors and so the ability to monitor all the different X’s we can think up.

GluuKeeping the lights on (or off). [Technorati links]

October 07, 2014 12:50 AM


Someone famous said “If you can switch a light, you can topple an empire.” I don’t have an empire, but it still got me wondering… after IOT happens, how will I control my lights?

People have a terrible track record managing the security of their digital IT infrastructure. Whether the hackers are bad guys trying to steal our identity, commercial entities observing our behavior, or government spies, we have very little assurance these days that our electronic transactions are secure or private. If you have any illusions about your security, watch this 20 minute video brought to my attention in a blog from Tozny.

So given the dire consequences of controlling the lights, where will I even administer security for my “house”? What device will I use? Apple to the rescue? Recently, Apple has embraced the idea of security as competitive advantage. Let’s imagine this use case: what if my TV has a “Movie Mode”, which calls API’s to dim the lights, lower the blinds and start the popcorn? Will I need an Apple TV, Apple lights, Apple blinds and an Apple popcorn maker? How do I ensure that my pranky eight year old can’t put my TV in movie mode when I’m on a conference call?

I guess one day there will be an iPhone or iPad app that lets me manage my policies. For example, TV with ID 12345 can control lights when user Mike is logged in from his house. The smart device makes sense, because it has a lot of great sensors that will make security stronger and more invisible. Apple has been patenting biometric authentication mechanisms, for example, Heartbeat authentication, that can identify you without your even knowing it.

However I fear that we will be a much poorer society if we only have security for Apple products–both in terms of diversity, and in terms of money in our savings account. I’m sure even people at Apple might acknowledge that Apple won’t be able to produce every electronic device in your IOT house. One of the exciting things about IOT is that it’s the first time individuals could perhaps compete with huge companies making innovative smart devices.

The hardest work in front of us is arriving at consensus. How do we get big companies, little companies, and open source enthusiasts to agree on standards that will enable centralized security? This is one of the reasons Gluu joined the Open Interconnect Consortium with Intel, Cisco and Samsung. It is also the reason we are directly participating in the User Managed Access Working Group, which is working on a standard that can be used for API access management. Other efforts are also underway, for example, the Zigbee Alliance announced new security standards that “provides consumers with the capability to directly control all devices in a home with one remote control.” Also the Allseen Alliance recently announced an initiative to advance smart lighting.

At Gluu, our mission is to leverage open standards, and open source software, just in case Apple doesn’t save us from Big Brother (or Apple turns out to be big brother!) For more information on the Gluu Server, the world’s first free open source access management platform, see our website at http://gluu.org

October 06, 2014

Kantara InitiativeKantara Initiative Director’s Corner [Technorati links]

October 06, 2014 08:27 PM

Dear Members and Community,

Welcome to the Kantara Initiative Director’s Corner newsletter for the 3rd quarter of 2014. In this installment we provide a recap of recent events, connections, news, and opportunities. We have some great news to share about our membership and we’ve participated in some great F2F and virtual events.

We’re pleased to present the Kantara Initiative IoT & Identity Strategic Series of workshops. This workshop series connects leaders, partners, and competitors alike to share information around the intersection of IoT, Identity, and Access Management. Positioned in the East and West coasts of the US and in the hub of Utrecht, Netherlands, the series of events has connected leaders to share use cases, successes, and strategies across industries and jurisdictions. This series has also focused on the economics, market building, and risk profiles associated with the Identity layer of IoT.

Thanks for connecting with us – Joni Brennan, Executive Director, Kantara Initiative

In this Issue

Spotlight: Kantara Board Member CA Technologies

“We have always believed that a collaborative and open environment for creating industry standards provides significant benefits to our customers, and therefore to us. The mission and membership of Kantara supports these goals of collaboration and innovation, and therefore we continue to strongly support and remain active in its activities.”

Event: Right to Be Forgotten – Savior or Unworkable Solution

September 30, 2014, Virtual Hangout

In today’s connected society, many people fear that personal data privacy is dead. Social media platforms such as: Facebook, Twitter, LinkedIn, Google+, etc. all require users to create online profiles and divulge personal information about themselves (accurate or not). Put social media aside; even simple internet users who have managed to avoid social media platforms are still vulnerable if they choose to use on-line services for shopping, email, or simply to browse. There is an underlying concept that users will eventually be logged in to some site at some time and this mere trend of ‘constant authentication’ aids in the tracking of users’ personal identifiable data and usage habits. So what can we do to defend ourselves against invasions of our privacy with regard to our digital identities?

In our last installment in this series on privacy, we touched upon this idea of data access control in the age of the Internet of Things but the European Union’s recent ruling on the ‘Right to Be Forgotten’ legislation is making media headlines due to the challenges in implementation and enforcement.

Joni Brennan (IEEE-SA Trust and Identity Technology Evangelist & Kantara Initiative, Executive Director), moderated a stellar panel including: Frédéric Donck (Director of the Internet Society’s European Regional Bureau), John C. Havens (Founder of The H(app)athon Project), and Steve Wilson (Vice President and Principal Analyst at Constellation Research).

Press: Kantara Initiative Awards ID.me CSP Trustmark Grant

September 11, 2014, Piscataway, NJ, USA

Kantara Initiative announced the Grant of Kantara Initiative Service Approval Trustmark to the ID.me Credential Service Provider (CSP) service operating at Level of Assurance 1, 2 and 3. ID.me was assessed against the Identity Assurance Framework – Service Assessment Criteria (IAF-SAC) as well as the Identity Credential Access Management (ICAM) Additional Criteria by Kantara Accredited Assessor Electrosoft.

A global organization, Kantara Initiative Accredits Assessors, Approves Credential and Component Service Providers (CSPs) at Levels of Assurance 1, 2 and 3 to issue and manage trusted credentials for ICAM and industry Trust Framework ecosystems. The broad and unique cross section of industry and multi-jurisdictional stakeholders, within the Kantara Membership, have the opportunity to develop new Trust Frameworks as well as to create profiles of the core Identity Assurance Framework for applicability to their communities of trust.

ISOC: Inter-Fed & Attributes Harmonization Workshops

September 2-4, 2014, Utrecht, Netherlands

Kantara Initiative Leadership and Members attended the Internet Society (ISOC) Inter-Federation and Attributes Management Harmonization workshop kindly hosted by SURFnet in Utrecht, Netherlands.

This activity was planned as a series of two workshops (one directly following the other) to bring together leaders and innovators of all types to present, discuss, and harmonize around the emerging areas of research and development of inter-federation (connection of one or more federations) and attributes management.

Inter-federation is our next frontier as innovators with in identity and access management standards, research, and education development. Leaders from around the world discussed inter-federation from the high-level view of “where are we now” to deeper level view on context setting and understanding as well as a focus toward more complex scenarios where Identity Assurance crosses national borders and concepts of Assurance of attributes.

Kantara Initiative: Access, Consent, & Control in the IoT

September 4-5, 2014, Utrecht, Netherlands

Kantara Initiative leaders and innovators gathered in Utrecht, Netherlands, September 4th-5th. In an event kindly hosted by SURFnet and sponsored by ForgeRock, leaders from the Kantara Initiative IDentities of Things (IDoT), User Managed Access (UMA), and Consent and Information Sharing (CIS) Open Notice Groups presented 1.5 days of Identity and IoT innovation harmonization.

Areas of coverage included presentations of use cases and demos that focused on the Identity layer of IoT. Specifically, the event addressed access control, notice, and consent with regard to contextual Identity systems. Leaders discussed these topics ranging from user-centric to enterprise and industrial access. The event provided an excellent opportunity to connect with peers, partners, and competitors.

Kantara Initiative: IoT & Access Control Workshop

September 17, 2014, Mountain View, California, USA

In a world of increasing network connectivity that interacts with more and more active and passive sensors, data is generated, managed, and consumed en masse. Industry experts discussed findings regarding standardization of the IoT space and where gaps may exist. This Kantara Initiative event focused on providing a review of use cases and demos as well as the implications of identity and personal identifiable information (PII) within the IoT space.

There are many initiatives in the IoT space and knowing where to go can be a challenge. Our goal for this event was to connect broad IoT-facing experts with Identity & IoT experts. Kantara Initiative’s Identities of Things (IDoT) group is leading the way for the intersection of IoT and Identity. With this opportunity we connected IEEE communities with Identity communities through our Kantara workshop. We were proud to partner with the IEEE-SA as one of the leaders in standardization of IoT. The Kantara Initiative event provided the perfect “warm-up” for the IEEE-SA event on the following days.

IEEE-SA IoT Workshop 2014

September 18-19, 2014, Mountain View, California, USA

The IEEE Standards Association (IEEE-SA) hosted an Internet of Things (IoT) Workshop, 18-19 September 2014 in Silicon Valley, Calif. During the two-day event, attendees explored the dynamics of the IoT markets and the convergence of platforms and services, with a special focus on the need for an even more interdisciplinary approach to the design of products and services for the IoT markets.

Kantara Initiative Leadership and Members participated in a panel to discuss the emerging landscape of issues that are at the intersection of IoT and Identity Management. Topics discussed included: concerns around management and security of personal data generated by IoT devices, how to secure IoT with regard to device and user identity, and benefits of converging communities to focus on interoperability standards and testing. Kantara Initiative Executive Director Joni Brennan moderated a panel with participants from: Allan Foster (ForgeRock), Ingo Friese (Deutche Telekom), and Mary Hodder (Independent).

Kantara Initiative: TSCP Identity and IoT Workshop

Today, having an Identity Management strategy is not only an IT need but rather it is a business priority. Identity and Identity Access Management is evolving and connecting to your customers, citizens, and partners means the difference between business as usual and business building innovation. Identity Relationship Management provides a common language for the evolution of identity as a driver of revenue. Building upon data context and the emerging Internet of Things, with respect for user control of data sharing, identity is now a powerful connection tool that fosters and supports relationships.

The group discussed high-level concepts around the modularity and adaptability that is needed to perform in today’s environment while maintaining appropriate access control. These concepts are internationally scoped as core business drivers moving beyond basic IT solutions and risk management. This event will focus on innovation of enterprise, small business, and governments’ services.

Nat SakimuraDebussy – The Mystery of Syrinx [Technorati links]

October 06, 2014 04:04 PM

Syrinx, written in 1913, can fairly be called an indispensable part of every flutist's repertoire. Well-known pieces for solo flute were so rare at the time that you have to go back to C.P.E. Bach's Sonata in A minor of 1763 to find an earlier one. Written under those circumstances, the piece expresses a rich range of emotions in a single melodic line and became a turning point that greatly opened up the possibilities of the flute music that followed.

Syrinx was written as incidental music for Gabriel Mourey's play "Psyché". It was meant to be played offstage during the play and was originally titled "Pan's Flute" – for the rest of that story, see the Wikipedia article [1].

Now, today's topic is the mystery surrounding this piece. Several puzzles have been raised:

  1. There is a theory that the piece was originally written without bar lines, that Marcel Moyse [2a] added them, and that this is the version that has come down to us.
  2. In the JOBERT edition edited by Louis Fleury [2b], to whom the piece was dedicated, the B in the second-to-last bar carries an accent (Example 1); the discovery of the autograph manuscript (Example 2) revealed that this was originally a diminuendo.

As for the first mystery, the source appears to be Moyse's own account. According to an article written in 1991 [3], Debussy strode over to a piano standing next to a small statuette at an afternoon party and wrote the piece right in front of Moyse, who premiered it that evening. The manuscript had neither bar lines nor phrase markings; Moyse added them later. But while Debussy was being showered with praise by the people around him, the manuscript disappeared into the pocket of "a flutist notorious for appropriating scores", and it remained missing until that flutist died and his widow brought his collection to Moyse to dispose of.

This account has drawn a lot of criticism. Jean-Pierre Rampal, in [4] the following month, apparently dismissed it as a lapse of Moyse's memory: the piece was written as incidental music for the play mentioned at the beginning, was dedicated to Louis Fleury, and was premiered by him. That is the accepted view in France and still seems to be the prevailing one today. If so, the claim that "there were no bar lines" would be an urban legend. For the reasons given below, however, I suspect the story may not be entirely false.

As for the second mystery: ever since the JOBERT edition (1927) edited by the dedicatee Louis Fleury, most of the scores in circulation have indeed shown an accent (Example 1).

(Example 1) The final bars of Syrinx. The B in the second-to-last bar carries an accent.

Debussy - Syrinx

(Source) Editions JOBERT [5]

Indeed, many performances place an accent on this note. Listen, for example, to Jean-Pierre Rampal.

However, in what was published in 1993 as the autograph manuscript (Example 2), this is clearly a diminuendo.

(Example 2) Facsimile of the manuscript said to be in Debussy's hand

(Example 2a) Close-up of the final bars

Debussy - Syrinx Handwritten

The accent on this B never fit the text, in which consciousness drowsily fades away, and it had always been a puzzling spot; if it is a diminuendo, it fits perfectly. This was pointed out to me this Sunday, and when I saw Example 2 I couldn't help thinking, "Ah, of course!"

It sounds like this.

Emmanuel Pahud plays it beautifully. "Yes, that's how it has to be!" you might think... but things are not that simple. In Example 2 (and Example 2a) there is what looks like Debussy's signature at the bottom right of the right-hand page. And that is the problem. Anyone who has seen Debussy's letters may well think, "Hang on a moment." Debussy's actual signature looks like this.

Debussy Autograph

(Figure 1) Debussy's signature [6]

Yes, completely different. In fact, handwriting analysis suggests that this score is neither in Debussy's hand nor in that of Fleury, the dedicatee [7].

Now we have a problem: the "autograph manuscript" that seemed to settle the matter may well be a fake.

With the autograph lost, the truth about this piece may thus remain forever in the dark. If there is one ray of light, it is that a recording by Moyse, Debussy's contemporary, survives.

In that recording it is a diminuendo, not an accent. But then the question becomes why Fleury edited it as an accent. Was his eyesight failing in old age? The edition was published in 1927, so Fleury was... wait, he had already died, in 1926. Does that mean the flutist Moyse was mocking was Fleury himself? And Moyse's account is remarkably specific: composed at a piano next to a small statuette at an afternoon party, and so on. Would someone of Moyse's stature really need to lie about details like that? The mystery deepens. What is more, what if the "Claude Debussy" "signature" on that "autograph" was simply Moyse noting down who the piece was by? It makes you want to see Moyse's handwriting [10]. On these long autumn nights, the mystery only deepens [8].

Well, we may never know the truth, but for my part I will go with the diminuendo, which suits the character of the music better.

 

[1] Syrinx (Japanese Wikipedia) http://ja.wikipedia.org/wiki/シランクス

[2a] Marcel Moyse (May 17, 1889 – November 1, 1984), a highly celebrated flutist, often called the father of modern flute playing. http://ja.wikipedia.org/wiki/マルセル・モイーズ

[2b] Louis Fleury (1878 – 1926), French flutist.

[3] Marcel Moyse recalled, “Debussy was asked to compose some music inspired by a statuette of a shepherd playing his pipe. On the afternoon of the party, Debussy strolled over to the piano adjacent to the statuette and rapidly wrote his little Syrinx. He handed me the manuscript to perform that evening. The composition lacked even a bar line or phrase marking; all markings on the manuscript were mine. The little work was almost lost to flutists when Debussy showed the manuscript to another flutist who was singularly adept in appropriating manuscript copies of flute works from the library. The manuscript conveniently found itself in the flutist’s coat pocket while Debussy was engaged in conversation with admirers. After the thieving flutist died, his widow, in need of money, called me to her assistance in disposing of the deceased’s collection of flute music. In the collection was, of course, the original and only copy of Syrinx.” Performance Guide: Interpreting Syrinx by Roy E. Ernst and Douglass M. Green. Flute Talk February 1991

[4] Flute Talk March 1991

[5] Debussy, C.: Syrinx, Editions JOBERT (1927, Renewed 1954)

[6] Fraser’s Autographs: Claude Debussy. Incidentally, as of October 5, 2014 it is listed at £2,250.

[7] http://www.flutetunes.com/tunes.php?id=163 That said, to an amateur's eye the handwriting of the music itself looks plausible, so perhaps only the name was added later by someone else. Could it have been Marcel Moyse?

[8] There is one more mystery. According to Jane F. Fulcher's "Debussy and His World", p. 138, the fragment of "Syrinx" included in a book Fleury published during his lifetime differs substantially from the "Syrinx" we know today. It may even be that the incidental music for "Psyché" dedicated to Fleury and the piece we now know as "Syrinx" are different works altogether.

[9] The page "Debussy, 'Syrinx', and the Autograph Manuscript" (『ドビッシーと「シランクス」と自筆稿』) is a useful reference. http://www.ne.jp/asahi/jurassic/page/talk/pahud/syrinx.htm

[10] A sample of Moyse's handwriting from 1972, when he was past eighty, can be seen at http://www3.tokai.or.jp/satou/flute/ . Unlike Debussy's hand it slants forward, and to that extent it is consistent with (Example 2a).

CourionUncle Sam Wants YOU to Take a Fresh Look at Your Identity & Access Management [Technorati links]

October 06, 2014 01:00 PM

Access Risk Management Blog | Courion

Kevin O'Connor

The US Department of Homeland Security recently published a Public Service Announcement, “Increase in Insider Threat Cases Highlight Significant Risks to Business Networks and Proprietary Information”, touching on multiple (and all too familiar) insider threat scenarios.

This announcement is the proverbial icing on an “insider threat” cake that was baked a long time ago. Disgruntled employees, malfeasance, and general conduct unbecoming are nothing new. What is new is the recognition that bad actors can siphon value from companies with alarming efficiency. These 21st-century digital pickpockets of sensitive data and proprietary information pose a new challenge with their blazing speed and seemingly invisible movement within their firms.

Of the ten recommendations offered to confront this issue, seven focus on Identity and Access Management (IAM) tasks.  First on the list is: Conduct a regular review of employee access and terminate any account that individuals do not need to perform their daily job responsibilities.

Clearly a prudent recommendation.  Yet, an elusive goal for even the most sophisticated IAM teams.  How do you align regular reviews with a continuously evolving threat?  How do you elevate existing risk management operating procedures without impeding your normal course of business?  With internal personnel moves occurring by the hour, how can you possibly ensure that the right people have the right access to the right information and are doing the right things with it?
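
The mechanics behind such a review are not spelled out here, so as a rough illustration of the idea, the sketch below reconciles a hypothetical account inventory against an HR roster and a role baseline: orphaned accounts get flagged for termination, out-of-role entitlements for review. It illustrates the general recommendation only and is not Courion's implementation; all names, roles, and entitlements are invented.

    # Toy access review: flag accounts whose owner is no longer an active
    # employee (orphans) and entitlements beyond the owner's role baseline.
    # All names, roles, and entitlements here are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Account:
        account_id: str
        owner: str                 # HR identifier of the employee who owns the account
        entitlements: set = field(default_factory=set)

    # Hypothetical authoritative sources: the HR roster and a per-role baseline
    active_employees = {"e101": "analyst", "e102": "engineer"}
    role_baseline = {"analyst": {"crm_read"}, "engineer": {"crm_read", "repo_write"}}

    accounts = [
        Account("svc-legacy", owner="e090", entitlements={"db_admin"}),           # e090 has left
        Account("jdoe",       owner="e101", entitlements={"crm_read", "db_admin"}),
    ]

    for acct in accounts:
        role = active_employees.get(acct.owner)
        if role is None:
            print(f"ORPHAN: terminate {acct.account_id} (owner {acct.owner} is not an active employee)")
            continue
        excess = acct.entitlements - role_baseline[role]
        if excess:
            print(f"REVIEW: {acct.account_id} exceeds the {role} baseline: {sorted(excess)}")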

The way forward must be a combination of strong IAM fundamentals coupled with the innovative capability found in identity and access intelligence solutions such as Courion’s Access Insight™. Access Insight helps firms redefine access management practices to take IAM beyond the traditional and into the exceptional. For example, Harvard Pilgrim Health Care uses Access Insight to document exactly who is accessing PHI in order to streamline and enhance their audit readiness for federal HIPAA regulations.

Tasks previously impractical to pursue are now within reach when you leverage Access Insight’s big data framework. This actionable intelligence enables you to close the access-risk process gap inherent in traditional IAM models. For example, Universal American uses Access Insight to compare current user access behavior against historical norms in near real time, spot unusual behavior, and trigger actionable alerts.
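
Courion has not published how Access Insight models behavior, but the general technique alluded to here – compare current access activity against a per-user historical baseline and alert on large deviations – can be sketched in a few lines. The users, counts, and three-sigma threshold below are invented for illustration.

    # Illustrative behavioral check: alert when today's access count for a
    # sensitive resource far exceeds a user's historical norm.
    # History, counts, and the threshold are hypothetical.
    from statistics import mean, pstdev

    history = {          # accesses per day over the past week, per user
        "alice": [3, 4, 2, 5, 3, 4],
        "bob":   [0, 1, 0, 0, 2, 1],
    }
    today = {"alice": 4, "bob": 45}

    def is_anomalous(samples, observed, sigmas=3.0):
        mu, sd = mean(samples), pstdev(samples)
        # Floor the std dev so a perfectly flat history still needs a real jump to alert.
        return observed > mu + sigmas * max(sd, 1.0)

    for user, count in today.items():
        if is_anomalous(history[user], count):
            print(f"ALERT: {user} accessed the resource {count} times today "
                  f"(historical mean ~{mean(history[user]):.1f}) – trigger a review")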

Allow us to speak with you about our proven approaches to reduce access risk.  Ask about how our Access Risk Quick Scan offer can help you uncover your organization’s access risk – in just hours.  Check us out and let us help you take your IAM beyond the traditional to the truly exceptional.

blog.courion.com

Vittorio Bertocci - MicrosoftApps as Organisms [Technorati links]

October 06, 2014 07:24 AM

(image: sketch of the progression described below, from fish to winged animal)

This afternoon I was absentmindedly fiddling with one of our fossil plates when my thoughts went to the upcoming Azure AD dev session I am scheduled to deliver in just a few weeks at TechEd Europe.
I have a lot of ground to cover: ideally I would like to talk about traditional web apps, single-page apps, web APIs, and native clients on multiple devices. That truly is a lot of stuff!

Although we have made great strides in simplifying things over the last few years, identity remains a hard-to-digest topic for developers. In all my sessions I try to come up with a story that stitches the sub-topics together, so that people can see past the low-level and operational details and understand the big picture. As we add more tricks to our bag and the scope of what we can do expands, it has become harder and harder to find a narrative or a metaphor that nicely ties so many disparate things together.

As I was so musing, the ancient shells encased in the fossil matrix inspired a sudden thought:
what if we thought of applications as living organisms?

Organisms mutated their body plans to colonize new environments and spread further – from the seas through the swamps to land, and even to the sky. Similarly, through the years new application architectures emerged to take advantage of changing conditions and new vectors to colonize – the rise of ever more capable web browsers and the extraordinary explosion of smart devices being prime examples.

Aha, I feel we’re onto something here! I don’t need the metaphor to be a perfect fit; I just need a narrative backbone to help the audience build scaffolds – scaffolds on which to store the avalanche of information I’ll pour on them in the 75 minutes of the session. Let’s refine this further by defining the mappings between the domains of organism evolution and application architecture history.

With the above as a guide, the metaphor rolls out pretty nicely. You can follow the progression with the sketch I pasted at the beginning of the blog.

  1. Fish – We start our journey with the classic roundtrip-based web application. It is indeed the simplest! The entire interaction pattern is based on the repetition of a browser requesting a resource, served by a backend that takes care of both business logic (blue B) processing and presentation generation (red P). Authentication is based on establishing a session, typically maintained by a cookie. Here is where I tell you about WS-Federation and OpenID Connect sign-in, how AAD represents web apps, and how to leverage the OWIN middleware to secure your roundtrip-based apps.
  2. Amphibious – Here we introduce the case in which traditional roundtrip apps take advantage of the programmable web and call the business logic of other apps – via the APIs they expose. Those server-to-server API calls are secured via OAuth2 and tokens. Here is where I can introduce OAuth2 clients, web APIs and permissions – and how to implement all that in AAD with ADAL and the Web API OWIN middleware implementing OAuth2 (a rough sketch of such a token request follows this list).
  3. Land animal – At this point we can shed the roundtrip legacy altogether, moving the presentation layer directly into the browser and introducing the single-page app paradigm. The security here gets interesting, with new OAuth flows and the disappearance of cookies – from the implementation perspective I can’t give more details at this point, but let’s just say that there will be brand new stuff to see.
  4. Winged animal – native apps. This is a truly new frontier, where the app breaks free of the boundaries of the browser and learns to thrive as a native process on the many client platforms available today. This is the good ol’ ADAL… or is it? Maybe there is news coming here as well!
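
To make step 2 a little more concrete: the session will use ADAL and the OWIN middleware, but stripped to its essentials a server-to-server call of that kind is an OAuth2 client-credentials token request to the Azure AD token endpoint, followed by a bearer call to the downstream API. The sketch below is a minimal, hypothetical illustration in Python rather than the .NET code the session will show; the tenant, client ID, secret, and API URLs are placeholders.

    # A minimal, hypothetical illustration of the "Amphibious" step: a confidential
    # client obtains an app-only token from Azure AD using the OAuth2
    # client-credentials grant and calls a downstream web API with it.
    # The tenant, client ID, secret, resource URI, and API URL are placeholders;
    # in a real .NET app ADAL and the OWIN middleware would do this work.
    import requests

    TENANT = "contoso.onmicrosoft.com"      # placeholder tenant
    TOKEN_URL = f"https://login.microsoftonline.com/{TENANT}/oauth2/token"

    def get_app_token(client_id, client_secret, resource):
        """Request an app-only access token for the target API (the 'resource')."""
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "resource": resource,           # identifier of the web API being called
        })
        resp.raise_for_status()
        return resp.json()["access_token"]

    def call_api(token):
        """Call the downstream API; its OWIN middleware validates the bearer token."""
        resp = requests.get("https://api.contoso.example/orders",   # placeholder API
                            headers={"Authorization": f"Bearer {token}"})
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        token = get_app_token("00000000-0000-0000-0000-000000000000", "app-secret",
                              "https://api.contoso.example")
        print(call_api(token))

The receiving API then validates the token's signature and audience before serving the call – which is exactly the part the Web API OWIN middleware mentioned above takes care of.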

Well, that’s it. Writing this post helped me put the idea in better focus – but I would really love to hear feedback from you folks. Let me know if that’s a story you’d like me to tell, or if you’d prefer another angle… we still have one week before I have to make a decision. Thanks in advance!

October 05, 2014

Anil JohnDo the Majority of Public Sector Digital Services Need Credentials? [Technorati links]

October 05, 2014 05:45 PM

One argument for the lack of easily available high-assurance credentials is that, given the low transaction volume, private-sector providers may not find it economically worthwhile to invest in creating high-assurance credentials just for the public sector.

So how about a potentially heretical suggestion? Don’t require credentials to access the application.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Kevin MarksHow did Twitter become the hate speech wing of the free speech party? [Technorati links]

October 05, 2014 07:32 AM

Twitter has changed. When it started out it was a haven from the unread count in your email inbox, from the tension of blogging and comments. You didn't have to write long-form essays, you didn't have to moderate commenters. You didn't have to make all those bold lines go away to reach inbox zero. You just followed some people and typed thoughts that occurred to you. Those who followed you got them by text or IM. Simple.

Then we came up with @ replies. They were a way of making it clear who you were responding to. Twitter sensibly adopted them, but they were off to one side: you had to click on the @ tab to see them. Your primary experience was still the people you followed, which you controlled. I wrote about Twitter theory with this in mind.

Twitter created permeable, overlapping publics, and championed free speech and anonymity. They said they were the "free speech wing of the free speech party".

Twitter reinforced this by hiding tweets that start with an @ unless you followed the person being talked to. This reduced serendipitous friend discovery, but it also damped down arguments. (You can still see this in Twitter analytics: note the difference in reach versus engagement for posts versus replies.)
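
For readers who never saw the old behavior, the rule described above is easy to model: a tweet that begins with an @-mention appears in your home timeline only if you follow both its author and the person being addressed. The toy function below, with invented handles, illustrates that rule and is of course not Twitter's actual code.

    # A toy model of the old rule: a tweet that starts with an @-mention is shown
    # in your home timeline only if you follow both its author and the person
    # being addressed. Handles and data are invented; this is not Twitter's code.
    def visible_in_home_timeline(tweet_text, author, viewer_follows):
        if author not in viewer_follows:
            return False                    # not in your timeline at all
        if not tweet_text.startswith("@"):
            return True                     # ordinary tweet: always shown
        addressee = tweet_text.split()[0].lstrip("@").rstrip(".,:;")
        return addressee in viewer_follows  # reply: needs both sides followed

    follows = {"kevinmarks", "t"}
    print(visible_in_home_timeline("@t good point about publics", "kevinmarks", follows))  # True
    print(visible_in_home_timeline("@stranger you are wrong", "kevinmarks", follows))      # False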

However, Twitter had a perceived problem - this very contained, comfortable nature meant that it took effort to get started and find people to follow. They worked really hard at changing this, building a sign up process that made it very hard not to follow lots of people, especially famous ones. Also, they changed the way we got notified.

Emails and app notifications of new followers and @ replies were set up to drive engagement, encouraging you to return.  That @ tab got an unread count on it, just like the email inbox, and the app got a red number on it on iPhones.

The problem with this is that responses do not follow a smooth distribution. Sure, most tweets get no responses, but some take off. Hashtags became another way to spread tweets sideways, beyond the follow graph.

Twitter saw this as increased engagement, and most of the time it was good. They built in special tools for "verified" users—the celebrities and brands they used to woo the rest of us at sign-up. The Verified get to damp down the notification flood, or just see other verified people.

The problem is that by making @ replies the most visible part of the app, they'd brought us back to email and blog comments again.

Your tweet could win the fame lottery, and everyone on the Internet who thinks you are wrong could tell you about it. Or one of the "verified" could call you out to be the tribute for your community and fight in their Hunger Games.

Say something about feminism or race, or sea lions, and you'd find yourself inundated with the same trite responses from multitudes. Complain about it, and they turn nasty, abusing you and calling in their friends to join in. Your phone becomes useless under the weight of notifications; you can't see your friends' support amongst the flood.

Twitter has become the hate speech wing of the free speech party.

The limited tools available - blocking, muting, going private - do not match well with these floods. Twitter's abuse reporting form takes far longer than a tweet, and is explicitly ignored if friends try to help.

This is where we are now. There are new attempts to remake following-focused, semi-permeable publics: Known, ello, and Quirell are some. In the indieweb world we are just starting to connect sites together with webmentions, and we need to keep this history in mind as we do.
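
For anyone wondering what "connecting sites together with webmentions" involves, the protocol is small: the sender fetches the target page, discovers the webmention endpoint it advertises, and POSTs the form-encoded source and target URLs to it. The sketch below is a minimal, illustrative sender, not the code behind Known, ello, or Quirell; the URLs are placeholders.

    # A minimal, illustrative webmention sender: discover the target page's
    # advertised endpoint, then POST the form-encoded source/target pair to it.
    # URLs are placeholders; this is not the code behind any site named above.
    from typing import Optional
    import requests

    def discover_endpoint(target: str) -> Optional[str]:
        """Look for a webmention endpoint advertised in the HTTP Link header.

        A fuller implementation would also scan the HTML body for
        <link rel="webmention"> or <a rel="webmention"> elements.
        """
        resp = requests.get(target, timeout=10)
        link = resp.links.get("webmention")          # requests parses the Link header for us
        if link:
            return requests.compat.urljoin(target, link["url"])
        return None

    def send_webmention(source: str, target: str) -> int:
        endpoint = discover_endpoint(target)
        if endpoint is None:
            raise RuntimeError("target does not advertise a webmention endpoint")
        resp = requests.post(endpoint, data={"source": source, "target": target})
        return resp.status_code                      # a 2xx response means the receiver accepted it

    if __name__ == "__main__":
        print(send_webmention("https://myblog.example/reply-to-this-post",
                              "https://example.com/original-post"))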

Also published on my own site

Also published on Known

Also published on ello

Also published on Medium