The Beautiful, Tormented Machine

http://manu.sporny.org

Challenges In Building a Sustainable Web Platform

by Manu Sporny and Dave Longley

There is a general API pattern emerging at the World Wide Web Consortium (W3C) where the browser mediates the sharing of information between two different websites. For the purposes of this blog post, let's call this pattern the "Web Handler" pattern. Web Handlers are popping up in places like the Web Payments Working Group, Verifiable Credentials Working Group, and Social Web Working Group. The problem is that these groups are not coordinating their solutions and are instead reinventing bits of the Web Platform in ways that are not reusable. This lack of coordination will most likely be bad for the Web. This blog post is about drawing attention to this growing problem and suggesting a way to address it that doesn't harm the long-term health of the Web Platform.

The Web Payments Polyfill

Digital Bazaar recently announced a polyfill for the Payment Request and Payment Handler APIs. It enables two things to happen.

The first thing it enables is a "digital wallet" website, called a payment handler, that helps you manage payment interactions. You store your credit cards securely on the site and then provide the card information to merchants when you want to make a purchase. The second thing it does is enable a merchant to collect information from you during a purchase, such as credit card information, billing address, shipping address, email address, and phone number.

When a merchant asks you to pay for something, you typically:

- select the card you want to use,
- select a shipping address (for physical goods), and
- send the information to the merchant.

Here is a video demo of the Web Payments polyfill in action.

It's one thing to mock up something that looks like it works, but implementations must demonstrate their level of conformance by passing tests from a W3C test suite. Against the official W3C test suite, Digital Bazaar's polyfill currently passes 702 out of 832 tests, and we don't see any barriers to getting very close to 100% in the months to come.

The other thing that's important for a polyfill is ensuring that it works in a wide variety of browsers, including Google Chrome, Apple Safari, Mozilla Firefox, and Internet Explorer. Today, Google Chrome on Android provides native support, with the polyfill making up even more of the support in the rest of the browsers. The great news is that this solution is compatible with roughly 3.3 billion browsers today, which is around 85% of the people using the Web. To take advantage of this opportunity, just like the native implementations, the polyfill has to be deployed by merchants and payment app providers on their websites.
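To make the merchant flow above concrete, here is a minimal sketch of what a merchant page might do with the Payment Request API, whether it is backed by native code or by the polyfill. The method list, amounts, and the sendToMerchantServer helper are illustrative, not taken from the specification or the polyfill's documentation.

```javascript
// Minimal sketch of a merchant using the Payment Request API.
const request = new PaymentRequest(
  [{supportedMethods: 'basic-card'}],  // accepted payment methods (illustrative)
  {total: {label: 'Total', amount: {currency: 'USD', value: '10.00'}}},
  {requestShipping: true}              // also ask for a shipping address
);

// Must be called from a user gesture, e.g. a "Buy" button's click handler.
async function checkout() {
  const response = await request.show();  // browser shows the handler selection UI
  await sendToMerchantServer(response);   // hypothetical helper: submit to merchant
  await response.complete('success');     // close the payment UI
}
```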
Credential Handler Polyfill

Digital Bazaar also recently announced an experimental polyfill for the Verifiable Credentials work. This polyfill enables a "digital wallet" website, called a credential handler, to help you manage your verifiable credential interactions. This feature enables websites to ask you for things like your shipping address, proof of age, professional qualifications, driver's license, and other sorts of third-party attested information that you may keep in your wallet. Here is a video demo of the Credential Handler polyfill in action.

Like the Web Payments polyfill, this solution is compatible with roughly 3.3 billion existing browsers, which is around 85% of the people using the Web today. Again, this doesn't mean that 3.3 billion people are using it today; the polyfill still has to be deployed on issuer, verifier, and digital wallet websites to make that a reality.
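For comparison, here is a rough sketch of how a relying party might ask a credential handler for third-party attested information. The request shape below is an assumption modeled loosely on the experimental polyfill; treat every property name here as illustrative rather than a settled API.

```javascript
// Rough sketch of a relying party requesting a credential through the
// browser-mediated credential handler; the query shape is an assumption.
async function requestProofOfAge() {
  const credential = await navigator.credentials.get({
    web: {                      // mediated, wallet-backed request (assumed shape)
      query: {ageOver: 21}      // illustrative query for a proof-of-age credential
    }
  });
  if (credential) {
    // The handler returned a user-approved response.
    console.log('credential response:', credential);
  }
}
```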
General Problem Statement

There is a common Web Handler pattern evident in the above implementations that is likely to be repeated for sharing social information (friends) and data (such as media files) in the coming years. At this point, the general pattern is starting to become clear:

- There is a website, called a Web Handler, that manages requests for information.
- There is a website, called the Relying Party, that requests information from you.
- There is a process, called Web Handler Registration, where the Web Handler asks for authorization to handle specific types of requests for you and you authorize it to do so.
- There is a process, called Web Handler Request, where the Relying Party asks you to provide a specific piece of information, and the browser asks you to select from a list of options (associated with Web Handlers) that are capable of providing the information.
- There is a feature that enables a Web Handler to optionally open a task-specific or contextual window to provide a user interface.
- There is a process, called Web Handler Processing, that generates an information response that is then approved by you and sent to the Relying Party via the browser.

If this sounds like the Web Intents API, or the Web Share API, that's because they also fit this general pattern. There is a good write-up of the reasons the Web Intents API failed to gain traction. I won't go into the details here, but the takeaway is that Web Intents did not fail because it wasn't an important problem that needed to be solved, but rather because we needed more data, implementations, and use cases around the problem, and the proper browser primitives had yet to be identified, before we could make progress. Hubris also played a part in its downfall. Fundamentally, we didn't have a general pattern or the right composable components identified when Web Intents failed in 2015, but we do now with the advent of Web Payments and Verifiable Credentials.

Two Years of the Web Payments WG

For those of you that have seen the Payment Request API, you may be wondering what happened to composability over the first two years of the Working Group's existence. Some of us did try to solve the Web Payments use cases using simpler, more composable primitives. There is a write-up on why convincing the Web Payments WG to design modular components for e-commerce failed. We had originally wanted at least two layers: one layer to request payment (and that's all it did), and another layer to provide Checkout API functionality (such as billing and shipping address collection). These two layers could be composed together, but in hindsight, even that was probably not the right level of abstraction. In the end, it didn't matter, as it became clear that the browser manufacturers wanted to execute upon a fairly monolithic design. Fast forward to today, and that's what we have.

Payment Request: A Monolithic API

When we tried to convince the browser vendors that they were choosing a problematic approach, our concern was that Payment Request would become a monolithic API, bundling too many responsibilities in such a highly specialized way that it couldn't be reused in other parts of the Web Platform.

Our argument was that if the API design did not create core reusable primitives, it could not be slotted in easily amongst other Web Platform features while maintaining a separation of concerns. It would instead create a barrier between the space where Web developers can compose primitives in their own creative ways and a new space where they must ask browser vendors for improvements because of a lack of control and extensibility via existing Web Platform features. We were therefore concerned that an increasing number of requests would be made to add functionality to the API where said functionality already exists, or could exist, in a core primitive elsewhere in the Web Platform.

Now that we have implemented the Payment Request API and pass 702 out of 832 tests, we truly understand what it takes to implement and program to the specification. We are also convinced that some of our concerns about the API have been realized. Payment Request is a monolithic API that confuses responsibilities and is so specialized that it can only ever be used for a very narrow set of use cases. To be clear, this doesn't mean that Payment Request isn't useful. It is still a sizeable step forward for the Web, even if it isn't very composable.

This lack of composability will most likely harm its long-term adoption, and it will eventually be replaced by something more composable, just as AppCache was replaced by Service Workers and XMLHttpRequest is being replaced by Fetch. While developers love these new features, browsers will forever have the dead code of AppCache and XMLHttpRequest rotting in their code bases.

Ignoring Web Handlers at our Peril

We now know that there is a general pattern emerging among the Payments, Verifiable Credentials, and Social Web work:

- Relying Party: request information
- Browser: select Web Handler
- Web Handler: select and deliver information to the Relying Party

We know, through the implementation work described above, that the code and data formats look very similar. We also know that there are other W3C Working Groups grappling with these cross-origin, user-centric data sharing use cases.

If each of these Working Groups does what the Payment Request API does, we'll expend three times the effort to create highly specific APIs that are only useful for the narrow set of use cases each Working Group has decided to work on. Compare this to expending far less effort to create a Web Handler API, with appropriate extension points, which would be able to address many more use cases than just payments.

Components for Web Handlers

There are really only four composable components that we would have to create to solve the generalized Web Handler problem (a sketch of how they might fit together follows the list):

1. Permissions that a user can grant to a website to let it manage information and perform actions for the user (payments, verifiable credentials, friends, media, etc.).
2. A set of APIs for the Web Handler to register contextual hints that will be displayed by the browser when performing Web Handler selection.
3. A set of APIs for Relying Parties to use when requesting information from the user.
4. A task-specific or contextual window the Web Handler can open to present a user interface if necessary.
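Here is what those four components might look like if they were composed into a single, general API. Every name in this sketch (navigator.webHandlers, register, request) is hypothetical; nothing like it is specified or implemented anywhere today.

```javascript
// Hypothetical, purely illustrative "Web Handler" API.

// 1 + 2. A handler site asks for permission and registers for a request
//        type, supplying contextual hints for the browser's selection UI.
await navigator.webHandlers.register('payment', {
  name: 'Example Wallet',                      // hint shown during selection
  icons: [{src: '/icon.png', sizes: '64x64'}]  // hint shown during selection
});

// 3. A relying party requests information of that type; the payload is an
//    extension point that travels to the chosen handler opaquely.
const result = await navigator.webHandlers.request('payment', {
  amount: {currency: 'USD', value: '10.00'}
});

// 4. The browser mediates handler selection; the chosen handler may open a
//    task-specific window, and the user-approved response comes back here.
console.log(result);
```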
The W3C Process makes it difficult for Working Groups chartered to work on a more specific problem, like the Web Payments WG, to work at this level of abstraction. However, there is hope: Service Workers and Fetch do exist. Other Working Groups at W3C have successfully created composable APIs for the Web, and the Web Payments work should not be an exception to the rule.

Conclusion

It should be illuminating that both the Web Payments API and the Credential Handler API were able to achieve 85% browser compatibility, covering 3.3 billion people, without needing any new features from the browser. So why are we spending so much time creating specifications for native code in the browser for something that doesn't need a lot of native code in the browser?

The polyfill implementations reuse existing primitives like Service Workers, iframes, and postMessage. It is true that some parts of the security model and experience, such as the UI that a person uses to select the Web Handler, registration, and permission management, would be best handled by native browser code, but the majority of the other functionality does not need it. We were able to achieve a complete implementation of Payment Request and Payment Handler because there were existing composable APIs in the Web Platform that had nothing to do with Web Payments, and that's pretty neat.
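As an illustration of how far those existing primitives go, here is a minimal sketch of a page talking to a cross-origin handler through an iframe and postMessage, which is the kind of plumbing the polyfills build on. The handler URL and message shapes are invented for the example.

```javascript
// Minimal sketch of cross-origin mediation with an iframe and postMessage.
const HANDLER_ORIGIN = 'https://wallet.example.com';     // hypothetical handler

const frame = document.createElement('iframe');
frame.src = HANDLER_ORIGIN + '/handler';
document.body.appendChild(frame);

frame.addEventListener('load', () => {
  // Send a request to the handler, restricting delivery to its origin.
  frame.contentWindow.postMessage(
    {type: 'credential-request', query: {/* illustrative */}},
    HANDLER_ORIGIN);
});

window.addEventListener('message', event => {
  if (event.origin !== HANDLER_ORIGIN) return;           // verify the sender
  console.log('handler responded:', event.data);
});
```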
When Web Intents failed, the pendulum swung far too aggressively in the other direction of highly specialized and focused APIs for the Web Platform. Overspecialization fundamentally harms innovation in the Web Platform, as it creates unnecessarily restrictive environments for Web developers and causes duplication of effort. For example, due to the design of the Payment Request API, merchants unnecessarily lose a significant amount of control over their checkout process. This is the danger of this new overspecialized API focus at W3C. It's more work for a less flexible Web Platform.

The right thing to do for the Web Platform is to acknowledge this Web Handler pattern and build an API that fits the pattern, not merely charge ahead with what we have in Payment Request. However, one should be under no illusion that the Web Payments WG will drastically change its course, as that would kick off an existential crisis in the group. If we've learned anything about W3C Working Groups over the past decade, it is that the larger they are, the less introspective and less likely they are to question their existence.

Whatever path the Web Payments Working Group chooses, the Web will get a neat new set of features around payments, and that has exciting ramifications for the future of the Web Platform. Let's just hope that future work can be reconfigured on top of lower-level primitives so that this trend of overspecialized APIs doesn't continue, as that would have dire consequences for the future of the Web Platform.

-- by @manusporny on October 31, 2017 at 10:08 pm

Summary: The World Wide Web Consortium members standardize technology for the next generation Web. It's arguable that the way this process works does not provide a clear path to progress work and favors large organizations. This blog post explains why we ended up here and how we could make the process more fair and predictable.

Since the early 1990s, the World Wide Web Consortium (W3C) has been the organization where most of the next generation Web has been standardized. It is where many of the important decisions around HTML, CSS, and browser APIs are made. It is also where many emerging non-browser technologies like Linked Data, Blockchain, Automotive Web, and Web of Things are incubated.

The consortium has always tried to balance the needs of large corporations with the needs of the general public, to varying degrees of success. More recently, a number of the smaller organizations at W3C have noted how arduous the process of starting new work at W3C has become, while the behavior of large organizations continues to cause heartburn. As a result, some of us are concerned that the process is going to further tilt the playing field toward large organizations and unnecessarily slow the rate of innovation at the heart of the Web. In this blog post I'm going to try to explain how creating new technology for the core of the Web works. I'll also suggest some changes that W3C could make to make the process more predictable and, hopefully, more fair to all participants.

From There to Here

For most of its existence, the W3C has had a process for developing next generation technology for the Web Platform, and it takes anywhere from 3 to 6 years for a technology to get through it. The formation and operation of a Working Group tends to be where the costs are fairly high in terms of W3C staff resources, which means the cost of failure is also relatively high for the organization as a whole. This has led to changes to the process that have raised the bar, in a negative way, on what is necessary to start a Working Group. The playing field has been tilted to the point that some of us are concerned that a handful of large organizations now have a tremendous amount of influence on where the Web is going, while large coalitions of smaller organizations and the general public struggle to have their concerns addressed in the Web Platform.

Recently, W3C has undergone a reorganization to make it more responsive to the needs of the organizations and people that use the Web. The reorganization splits the organization into multiple functional groups: Strategy, Project Management, Architecture and Technology, Global Participation, Member Satisfaction, Business Development, etc. While the reorganization feels like a move in the right direction, it doesn't seem to address how difficult it is to start and shepherd work through the W3C.
Let's analyze why that is.

Why Shepherding Work Through the W3C is Onerous

It is claimed that one of the primary bottlenecks at W3C when it comes to starting a new Working Group is allocating W3C staff time to the project. The argument goes that there are just not enough W3C staffers for the workload of the organization. This is because W3C exists on a thin gruel of funding from membership fees (as well as other disparate sources), and hiring W3C staff to help support new work is financially burdensome. We have limited resources, so we must only pursue work that has the very highest likelihood of success. This means that parts of the Web Platform ecosystem develop far more slowly than they should.

This dynamic has prompted a number of the larger organizations at W3C to push back on new charters, citing this bottleneck. This is a misguided strategy. We shouldn't be focused on eliminating as much risk as possible; we should be focused on reducing the cost of making mistakes.

As a result of the focus on eliminating as much risk of failure as possible, some of these large organizations have started requesting new requirements for starting work. Some examples include complete technical specifications, "significant" deployments to customers, and large vendor support. These new requirements are intended to increase the likelihood of success and slow the rate of new work at W3C. While slowing the rate of new work may address the staffing issue, it also creates a situation where the W3C can't keep up with the amount of standardization needed for the Web Platform to stay relevant. It also creates a bias in favor of large organizations.

We have seen these new requirements ignored when it suits a large organization to do so (e.g. Web Payments and Web Application Security). If a large organization can see how their company financially benefits from an added feature to the Web Platform, they seem to have a much easier time getting work started at the W3C, as the "starting new work" requirements are not applied to them in the same way as they are to the smaller organizations at W3C.

While I won't go into all of them here, there are more double standards, no pun intended, at play when transitioning work to a Working Group at W3C. This dynamic results in significant frustration for the smaller organizations at W3C when the goal posts for starting new work keep changing based on the current desires of the larger organizations.

Regardless of whether this is a staffing issue, a technology maturity issue, or one of the other new requirements at play at W3C, one thing is certain: the list of requirements for transitioning work from an experimental technology to a Working Group at W3C is not clear, and it seems to favor larger organizations. It is a source of extreme frustration for those of us that are trying to help build the next generation Web and are NOT working for a multi-billion dollar multinational corporation.

The W3C Working Group Formation Checklist

One remedy for the aforementioned problems is to employ a simple but detailed checklist. This is one of the tools that is currently missing for W3C members.

Some have argued that this lack of a clear checklist is by design. Some argue that the formation of every new Working Group is unique and requires discussion and debate. Having participated in starting multiple groups at W3C over the past decade, I disagree. The details differ, but the general topics that are debated remain fairly consistent.
The data that you need to support the creation of a Working Group tend to be the same. Here is the checklist that I've put together over the years, after much trial and error at W3C. The purpose of the checklist is to gather data so that your community can prove that it has done its due diligence to the W3C membership when requesting the creation of a Working Group.

1. Clearly identify and articulate a problem statement. Do this by sending out a survey to all organizations that you believe may benefit from a standard. You will need at least 35 organizations to become actively involved; I typically end up having to contact close to 100 organizations to get this core group formed. Example: Digital Offers Problem Statement Survey.
2. Create a draft charter for a Working Group that will take the technical specification through the W3C standardization process. The charter time frame should be no more than 24 months: 6 months to spin up, 12 months to finalize the specs, and 6 months to complete interoperability testing. Example: Verifiable Claims Working Group Draft Charter.
3. Create an executive summary for W3C member companies that summarizes the work the group has done. At this point you have far more content than W3C member company representatives have time to read when deciding whether to join the initiative; ease their decision making by providing a summary. Example: Verifiable Claims Executive Summary for W3C Members.
4. Measure buy-in for the proposed Working Group Charter and technical deliverables. The best way to do this is via another survey, distributed to any organization that participated in step #1, any organization that has joined the work since then, and any W3C member organization that may be impacted by the work. Example: Demonstration of Support for Verifiable Claims Working Group Charter Survey.

I don't expect much in the list above to be controversial. There are templates that the W3C membership should create for the surveys and reports above. Having this W3C Working Group Formation Checklist available will help organizations navigate the often confusing W3C Process.

Reducing Reliance on W3C Staff

The checklist above makes the process of going from an experimental technology to a W3C Working Group more clear. The checklist, however, does not alleviate the W3C staffing shortage. In fact, if the way W3C staff are utilized does not also change, it might make the current situation worse.

The W3C staff play a very important role in that they help organizations navigate how to get things done via the W3C Process. They help build consensus before and during a Working Group activity at W3C. This can be a double-edged sword. If there is a W3C staff member available to help, they can be a fantastic champion for a group's work. If there isn't, and you've never progressed work through W3C yourself, you will find yourself in the unfavorable position of not knowing how to proceed. Many of the staff members also have their own way of navigating the W3C process, and many of them have to repeat themselves when new work is started. In short, there is a lot of engineering churn and tribal knowledge at W3C; this is true of most standardization bodies.

The unfortunate truth is that building out the Web Platform is currently being restricted by the way that we start and progress new work at W3C. The W3C membership relies on the W3C staff to its detriment. The W3C staff are incredibly helpful, but there aren't enough of them to support building out all of the new functionality needed for the Web, and this is ultimately bad for the Web.

In order to address the staffing shortage, I propose that we shift much of the staff's work from executing upon the proposed checklist, which is more or less what they do today, to verifying that the checklists are being processed correctly. This is effectively a quality control check on the work that groups are doing as they work their way through the proposed checklist. It offloads a significant chunk of W3C staff time to the groups that want to see their work get traction, while giving those groups clear goals to achieve.

Blocking Work at W3C

W3C members currently have the ability to stop work that has ticked all of the necessary boxes in the checklist mentioned above. At present, large organizations tend not to become involved with work until a vote for the Working Group is circulated by W3C staff. Some members then suggest that they may vote against the work if their viewpoint isn't taken into account, even though they have not participated in the work to date.
Some call this a part of the process, but it is uselessly frustrating to those that have spent years building consensus around a Working Group proposal only to have a large W3C member company respond with a "Why wasn't I consulted?" retort.

So, now for the controversial bit: a W3C Community Group's work should automatically transition to a W3C Working Group if a significant coalition of companies, say around 35 of them, have ticked all of the boxes in a predefined W3C Working Group Formation checklist. This checklist should include the creation of two interoperable implementations and a test suite. In other words, it should meet all of the minimum bars for a Working Group's success per the W3C Process. Making this change achieves the following: each step above would have document templates that W3C members can use as a starting point; none of the steps requires W3C staff resources; and if all steps are completed, the formation of a Working Group is the natural outcome and should not be blocked by the membership.

This checklist approach may be seen as too constraining for some, and that's why it's voluntary. Some organizations may feel that they do not need to produce all of the work above to get a Working Group, and those organizations can choose to ignore the checklist. Initiatives not using the checklist should, however, expect push-back if they choose not to answer some of the questions that the checklist covers.

Rebalancing How the Web is Built

The current process for developing next generation standards for the Web is too unpredictable and too constrained by limited W3C resources. There are groups of small organizations that want to help create these next generation standards; we have to empower those groups with a clear path to standardization. We must not allow organizations to block work if the champions of the work have met the requirements in the checklist. Doing this will free up W3C staffing resources and ensure that the Web Platform advances at a natural and rapid rate.

A W3C Working Group Formation checklist would help make the process more predictable, require less W3C staff time to execute, and provide a smoother path from the inception of an idea, to implementation, to standardization of a technology for the Web Platform. That would be great for the Web.

-- by @manusporny on August 31, 2016 at 2:46 am

Summary: This blog post strongly recommends that the Web Payments HTTP API and Core Messages work be allowed to proceed at W3C.

For the past six years, the W3C has had a Web Payments initiative in one form or another. First came the Web Payments Community Group (2010), then the Web Payments Interest Group (2014), and now the Web Payments Working Group (2015). The titles of those groups share two very important words: "Web" and "Payments".

Payments are a big and complex landscape, and there have been international standards in this space for a very long time. These standards are used over a variety of channels, protocols, and networks. ISO-8583 (credit cards), ISO-20022 (inter-bank messages), ISO-13616 (international bank account numbers): the list is long, and it has taken decades to get this work to where it is today. We should take these messaging standards into account while doing our work.

The Web is a big and complex landscape as well. The Web has its own set of standards, HTML, HTTP, and URL among them; the list is equally long, with many years of effort to get this work to where it is today.
Like payments, there are also many sorts of devices on the Web that access the network in many different ways. People are most familiar with the Web browser as a way to access the Web, but tend to be unaware that many other systems, such as banks, business-to-business commerce systems, phones, televisions, and now increasingly appliances, cars, and home utility meters, also use the Web to provide basic functionality. The protocol these systems use is often HTTP (outside of the Web browser), and those systems also need to initiate payments.

It seems as if the Web Payments Working Group is poised to delay the Core Messaging and HTTP API work. This is important work that the group is chartered to deliver. The remainder of this blog post elaborates on why delaying this work is not in the best interest of the Web.

Why a Web Payments HTTP API is Important

At least 33% of all payments [1][2], like subscriptions and automatic bill payment, are non-interactive. The Web Payments Working Group has chosen to deprioritize those use cases for the past 10 months. The Web Payments Working Group charter expires in 14 months. We're almost halfway down the road with no First Public Working Draft of an HTTP API or Web Payments Core Messages, and given the current rate of progress and the way we're operating (working on specifications in a serial manner instead of in parallel), there is a real danger that the charter will expire before we get the HTTP API out there.

The Case Against the Web Payments HTTP API and Core Messages

Some in the group have warned against publication of the Web Payments HTTP API and Core Messages specifications on several grounds. While some of the arguments seem reasonable on the surface, deconstructing them shows that only one side of the story is being told. Let's analyze each argument to see where we end up.

"The Web Payments HTTP API and Core Messages are a low priority."

The group decided that the Web Payments HTTP API and Core Messages specs were a relatively lower priority than the Browser API and the Payment Apps API until June 2016. There was consensus around this in the group, and our company agreed with that consensus. What we did not agree to is that the HTTP API and Core Messages are a low priority in the sense that it's work that we really don't need to do. One of the deliverables in the charter of the Working Group is a Web Payments Messages Recommendation. The charter also specifies that "request messages are passed to a server-side wallet, for example via HTTP, JavaScript, or some other approach". Our company was involved in the writing of the charter of this group, and we certainly intended the language of the charter to include HTTP, which it does.

So, while these specs may be lower priority, it's work that we are chartered to do. This work was one of the reasons that our company joined the Web Payments Working Group. Delaying this work is making a number of us very concerned about what the end result is going to look like. The fact that there is no guiding architecture or design document for the group makes the situation even worse. The group is waiting for an architecture to emerge, and that is troubling because we only have around 8 months left to figure this out and then 6 months to get implementations and testing sorted.
One way to combat the uncertainty is to do work in parallel, as it will help us uncover issues sooner rather than at the end, when it will be too late.

"There is a lack of clear support."

In the list of concerns, it was noted that "activity on the issue lists and the repositories for these deliverables has been limited to two organizations", which "suggests that the Working Group as a whole is not engaged in this work". Previous iterations of the HTTP API and Core Messages specifications have been in development for more than 5 years, with far more than two organizations collaborating on the documents. It is true that there has been a lack of engagement in the Web Payments Working Group, primarily because the work was deprioritized. That being said, there are only ever so many people that actively work on a given specification. We need to let people who are willing and able to work on these specifications proceed in parallel with the other documents that the group is working on.

"There is low interest from implementers in the group."

We were asked to not engage the group, which we didn't, and we still ended up with two implementations and another commitment to implement. Note that this is before First Public Working Draft. Commitments to implement are typically not requested until entering Candidate Recommendation, so a request for implementation commitments before a First Public Working Draft is strange and not a requirement per the W3C Process.

If the group is going to require implementations as a prerequisite for First Public Working Draft publication, then these new requirements should apply equally to all specifications developed by the group. I personally think this requirement is onerous and sets a bad precedent, as it raises the bar for starting work in the group so high that it'll result in a number of good initiatives being halted before they have a chance to get a foothold in the group. For example, I expect that the SEPA Credit Transfer and crypto-currency payment methods will languish in the group as a result of this requirement.

"The use cases are not yet well understood."

It has also been asserted that the basic use cases for the Web Payments HTTP API are not well understood. We have had a use cases document for quite a while now, which makes this assertion shaky. To restate what has been said before in the group, the generalized use case for the HTTP API is simple: a piece of software (that is not a web browser) operating on behalf of a payer attempts to access a service on a website that requires payment; the payee software provides a payment request to the payer software; the payment request is processed, and access is granted to the service.

Any system that may need to process payments in an automated fashion could leverage the HTTP API to do so. Remember, at least 33% of all payments are automated, and perhaps many more could be automated if there were an international standard for doing so.
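To make that generalized use case concrete, here is a rough sketch of what the flow could look like over HTTP, assuming an HTTP 402 "Payment Required" challenge and a hypothetical payWithWallet helper. None of this is the specification's actual wire format; it only illustrates the shape of a machine-to-machine payment.

```javascript
// Illustrative sketch of a non-interactive, HTTP-based payment flow.
async function payForService(serviceUrl) {
  let response = await fetch(serviceUrl);
  if (response.status === 402) {                    // payment required
    const paymentRequest = await response.json();   // payee's payment request
    const paymentResponse = await payWithWallet(paymentRequest); // hypothetical
    response = await fetch(serviceUrl, {            // retry with proof of payment
      method: 'POST',
      headers: {'Content-Type': 'application/json'},
      body: JSON.stringify(paymentResponse)
    });
  }
  return response;                                  // access granted (or denied)
}
```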
There are more use cases that would benefit from an HTTP API, identified years ago and placed into the Web Payments Use Cases document: Point of Sale, Mobile, Freemium, Pre-auth, Trialware, In-Vehicle, Subscription, Invoices, Store Credit, Automatic Selection, Payer-initiated, and Electronic Receipts. Additional use cases from the W3C Automotive Working Group, related to paying for parking, tolls, and gasoline, have been proposed as well. The use cases have been understood for quite some time.

"It's too soon to conclude that payment messages will share common parts between the Browser API and the HTTP API."

The work has already been done to determine whether there are common parts, and those that have done the work have discovered around 80% overlap between the Browser API messages and the HTTP API messages. Even if this were not the case, I had suggested at least two ways we could deal with the concerns; those options were not surfaced to the group in the call for consensus, which is frustrating.

The long-term effects of pushing off discussion of core messages, however, are more concerning. If we cannot find common messages, then the road we're headed down is one where a developer will have to use different Web Payments messages depending on whether payment is initiated via the browser or via non-browser software. In addition, this further confuses our relationship to ISO-20022 and ISO-8583 and will make Web developers' lives far more complex than necessary. We're advocating for two ways of doing something when we should be striving for convergence.

The group is chartered to deliver a Web Payments Messages Recommendation; I suggest we do that. We are more than 40% of the way through our chartered timeline, and we haven't even started having this discussion yet. We need to get this document sorted as soon as possible.
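To see why the overlap matters, here is an illustrative guess at what a shared core message could look like. The field names below mirror the Browser API's general vocabulary but are assumptions, not the actual schema of any Working Group deliverable.

```javascript
// Illustrative guess at a core payment request message that could be
// shared by the Browser API and the HTTP API.
const corePaymentRequest = {
  methodData: [{supportedMethods: 'https://example.com/sepa'}], // illustrative
  details: {
    total: {label: 'Invoice #1234', amount: {currency: 'EUR', value: '42.00'}}
  }
};

// In a browser, the same structure could feed the Browser API...
// new PaymentRequest(corePaymentRequest.methodData, corePaymentRequest.details)
// ...while over HTTP the same JSON would simply travel in a message body.
```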
Summary: This blog post strongly recommends that the Web Payments HTTP API and Core Messages work be allowed to proceed at W3C.

For the past six years, the W3C has had a Web Payments initiative in one form or another. First came the Web Payments Community Group (2010), then the Web Payments Interest Group (2014), and now the Web Payments Working Group (2015). The titles of all of those groups share two very important words: "Web" and "Payments".

"Payments" are a big and complex landscape, and there have been international standards in this space for a very long time. These standards are used over a variety of channels, protocols, and networks. ISO-8583 (credit cards), ISO-20022 (inter-bank messages), ISO-13616 (international bank account numbers): the list is long, and it has taken decades to get this work to where it is today. We should take these messaging standards into account while doing our work.

The "Web" is a big and complex landscape as well. The Web has its own set of standards: HTML, HTTP, URL. The list is equally long, with many years of effort behind it. Like payments, there are also many sorts of devices on the Web that access the network in many different ways. People are most familiar with the Web browser as a way to access the Web, but tend to be unaware that many other systems, such as banks, business-to-business commerce systems, phones, televisions, and now increasingly appliances, cars, and home utility meters, also use the Web to provide basic functionality. The protocol that these systems use is often HTTP (outside of the Web browser), and those systems also need to initiate payments.

It seems as if the Web Payments Working Group is poised to delay the Core Messages and HTTP API work. This is important work that the group is chartered to deliver. The remainder of this blog post elaborates on why delaying this work is not in the best interest of the Web.

Why a Web Payments HTTP API is Important

At least 33% of all payments [1: https://www.nacha.org/news/ach-volume-increases-23-billion-payments-2014] [2: http://creditcardforum.com/blog/credit-card-statistics/], like subscriptions and automatic bill payments, are non-interactive. The Web Payments Working Group has chosen to deprioritize those use cases for the past 10 months. The Web Payments Working Group charter expires in 14 months.
We're almost halfway down the road with no First Public Working Draft of an HTTP API or of Web Payments Core Messages, and given the current rate of progress and the way we're operating (working on specifications in a serial manner instead of in parallel), there is a real danger that the charter will expire before we get the HTTP API out there.

The Case Against the Web Payments HTTP API and Core Messages

Some in the group have warned against publication of the Web Payments HTTP API and Core Messages specifications on the following grounds: the work is a low priority, there is a lack of clear support, there is low interest from implementers, the use cases are not yet well understood, and it is too soon to conclude that payment messages will share common parts. While some of these arguments seem reasonable on the surface, deconstructing them shows that only one side of the story is being told. Let's analyze each argument to see where we end up.

The Web Payments HTTP API and Core Messages are a low priority.

The group decided that the Web Payments HTTP API and Core Messages specs were a relatively lower priority than the Browser API and the Payment Apps API until June 2016. There was consensus around this in the group, and our company agreed with that consensus. What we did not agree to is that the HTTP API and Core Messages are a low priority in the sense that it's work we really don't need to do. One of the deliverables in the charter of the Working Group is a Web Payments Messages Recommendation. The charter also specifies that "request messages are passed to a server-side wallet, for example via HTTP, JavaScript, or some other approach". Our company was involved in writing the charter of this group, and we certainly intended the language of the charter to include HTTP, which it does.

So, while these specs may be lower priority, they are work that we are chartered to do. This work was one of the reasons our company joined the Web Payments Working Group. Delaying it is making a number of us very concerned about what the end result is going to look like. The fact that there is no guiding architecture or design document for the group makes the situation even worse. The group is waiting for an architecture to emerge, and that is troubling because we only have around 8 months left to figure this out and then 6 months to get implementations and testing sorted. One way to combat the uncertainty is to do work in parallel, as it will help us uncover issues sooner rather than at the end, when it will be too late.

There is a lack of clear support.

In the list of concerns, it was noted that: "Activity on the issue lists and the repositories for these deliverables has been limited to two organizations... This suggests that the Working Group as a whole is not engaged in this work."

Previous iterations of the HTTP API and Core Messages specifications have been in development for more than 5 years, with far more than two organizations collaborating on the documents. It is true that there has been a lack of engagement in the Web Payments Working Group, primarily because the work was deprioritized. That being said, there are only ever so many people who actively work on a given specification. We need to let the people who are willing and able to work on these specifications proceed in parallel with the other documents that the group is working on.

There is low interest from implementers in the group.

We were asked not to engage the group, which we didn't, and we still ended up with two implementations and another commitment to implement. Note that this is before First Public Working Draft. Commitments to implement are typically not requested until entering Candidate Recommendation, so a request for implementation commitments before a First Public Working Draft is strange and is not a requirement per the W3C Process.

If the group is going to require implementations as a prerequisite for First Public Working Draft publication, then these new requirements should apply equally to all specifications developed by the group. I personally think this requirement is onerous and sets a bad precedent, as it raises the bar for starting work in the group so high that it'll result in a number of good initiatives being halted before they have a chance to get a foothold in the group. For example, I expect that the SEPA Credit Transfer and crypto-currency payment methods will languish in the group as a result of this requirement.

The use cases are not yet well understood.

It has also been asserted that the basic use cases for the Web Payments HTTP API are not well understood. We have had a use cases document for quite a while now, which makes this assertion shaky. To restate what has been said before in the group, the generalized use case for the HTTP API is simple: a piece of software (that is not a web browser) operating on behalf of a payer attempts to access a service on a website that requires payment. The payee software provides a payment request to the payer software. The payment request is processed and access is granted to the service.

Any system that may need to process payments in an automated fashion could leverage the HTTP API to do so. Remember, at least 33% of all payments are automated, and perhaps many more could be automated if there were an international standard for doing so. A rough sketch of this flow appears below.
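To make the use case concrete, here is a minimal sketch in JavaScript of what such an exchange could look like. This is illustrative only: the use of HTTP status 402, the "Payment-Proof" header, and the payWithStoredInstrument() helper are assumptions made for the example, not message formats defined by the group's specifications.

    // Hypothetical sketch of the generalized HTTP API use case: payer
    // software (not a browser) requests a paid resource, receives a
    // machine-readable payment request, pays, and retries.
    // Requires Node.js 18+ for the global fetch() API.

    // Hypothetical helper: pay with a stored instrument, no human involved.
    async function payWithStoredInstrument(paymentRequest) {
      // ...select a stored card or account, authorize, return proof...
      return { reference: 'example-payment-reference' };
    }

    async function accessPaidService(serviceUrl) {
      // 1. The payer software attempts to access the service.
      let response = await fetch(serviceUrl);

      // 2. The payee software responds with a machine-readable payment
      //    request (402 is the HTTP status reserved for "Payment Required").
      if (response.status === 402) {
        const paymentRequest = await response.json();

        // 3. The payment request is processed automatically.
        const proof = await payWithStoredInstrument(paymentRequest);

        // 4. The payer software retries with proof of payment, and
        //    access is granted to the service.
        response = await fetch(serviceUrl, {
          headers: { 'Payment-Proof': JSON.stringify(proof) }
        });
      }
      return response;
    }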
There are more use cases that would benefit from an HTTP API, identified years ago and placed into the Web Payments Use Cases document: Point of Sale, Mobile, Freemium, Pre-auth, Trialware, In-Vehicle, Subscription, Invoices, Store Credit, Automatic Selection, Payer-initiated, and Electronic Receipts. Additional use cases from the W3C Automotive Working Group, related to paying for parking, tolls, and gasoline, have been proposed as well. The use cases have been understood for quite some time.

It's too soon to conclude that payment messages will share common parts between the Browser API and the HTTP API.

The work has already been done to determine whether there are common parts, and those who have done that work have discovered around 80% overlap between the Browser API messages and the HTTP API messages. Even if this were not the case, I had suggested that we could deal with the concerns in at least two ways. The first was to mark this concern as an issue in the specification before publication. The second was to relabel the "Core Messages" as "HTTP Core Messages" and change the label back if the group was able to reconcile the messages between the Browser API and the HTTP API. These options were not surfaced to the group in the call for consensus, which is frustrating.

The long-term effects of pushing off discussion of core messages, however, are more concerning. If we cannot find common messages, then the road we're headed down is one where a developer will have to use different Web Payments messages depending on whether payment is initiated via the browser or via non-browser software. In addition, this further confuses our relationship to ISO-20022 and ISO-8583 and will make Web developers' lives far more complex than necessary.
We're advocating for two ways of doing something when we should be striving for convergence. The group is chartered to deliver a Web Payments Messages Recommendation; I suggest we do that. We are more than 40% of the way through our chartered timeline and we haven't even started having this discussion yet. We need to get this document sorted as soon as possible.

The Problem With Our Priorities

The problem with our priorities is that we have placed the Browser API front-and-center in the group. The group did this for two reasons. First, a subset of the group wanted to improve the "checkout experience" and iterate quickly, instead of focusing on initiating payments, which was more basic and easier to accomplish. Second, a subset of the group was very concerned that the browser vendors would lose interest if their work was not done first.

I understand and sympathize with both of these realities, but as a result, the majority of the other organizations in the group are now non-browser second-class citizens. This is not a new dynamic at W3C; it happens regularly, and as one of the smaller W3C member companies, it is thoroughly frustrating. It would be more accurate to have named ourselves the Browser Payments Working Group, because that is primarily what we've been working on since the group's inception, and if we don't correct course, that is all we will have time to do. This focus on the browser checkout experience, and on pushing things out quickly without much thought to the architecture of what we're building, does get "product" out there faster. It is also short-sighted, results in technical debt, and makes it harder to reconcile things like the HTTP API and Core Messages after the Browser API is "fully baked". We are supportive of browser specifications, but we are not supportive of only browser specifications.

This approach is causing the design of the Web Payments ecosystem to be influenced by the way things are done in browsers to a degree that is deeply concerning. Those of us in the group who are concerned with this direction have been asked not to "distract" the group by raising concerns related to the HTTP API and Core Messages specifications. "Payments" and the "Web" are much bigger than just browsers; it's time that the group started acting accordingly.

Parallel and Non-Blocking is a Better Approach

The Web Payments Working Group has been working on specifications in a serial fashion since the inception of the group. The charter expires in 14 months, and we typically need around 6 months to get implementations and testing done. That means we really only have 8 months left to wrap up the specs. We're not going to get there by working on specs in a serial fashion. We need to start working on these issues in parallel. We should stop blocking people who are motivated to work on specifications that are needed for the work to be successful. Other groups work in this fashion. For example, the Web App Sec group has over 13 specs that they're working on, many of them in parallel. The same is true for the Web Apps Working Group, which has roughly 32 specs that it's working on, often in parallel. These are extremes, but so is focusing on only one specification at a time in a Working Group.

We should start working in parallel. Let's publish the HTTP API and HTTP Core Messages specifications as First Public Working Drafts and get on with it.

-- by @manusporny on August 13, 2016 at 9:52 pm

An important aspect of systems design is understanding the trade-offs you are making in your system. These trade-offs are influenced by a variety of factors: latency, safety, expressiveness, throughput, correctness, redundancy, etc. Systems, even ones that do effectively the same thing, prioritize their trade-offs differently. There is rarely one perfect solution to any problem.

It has been asserted that Unconstrained JSON-LD Performance Is Bad for API Specs. In that article, Dr. Chuck Severance asserts that JSON-LD parsing is 2,000 times more costly in real time and 70 times more costly in CPU time than pure JSON processing. Sounds bad, right? So, let's unpack these claims and see where the journey takes us.

TL;DR: Don't ever put a system that uses JSON-LD into production without thinking about your JSON-LD context caching strategy. If you want to use JSON-LD, your strategy should probably be one of the following: cache common contexts and do JSON-LD processing, or skip JSON-LD processing but enforce an input format via something like JSON Schema on your clients so that they're still submitting valid JSON-LD.

The Performance Test Suite

After reading the article yesterday, Dave Longley (the co-inventor of JSON-LD and the creator of the jsonld.js and php-json-ld libraries) put together a test suite that effectively re-creates the test that Dr. Severance ran. It processes a JSON-LD Schema.org Person object in a variety of ways. We chose to change the object because the one that Dr. Severance chose is not necessarily a common use of JSON-LD (due to the extensive use of CURIEs) and we wanted a more realistic example, one that uses terms; a document along these lines appears below.
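For illustration, a term-based Schema.org Person document looks something like the following. This particular document is a made-up example, not the exact test input:

    // An illustrative Schema.org Person expressed with terms (such as
    // "name" and "url") rather than CURIEs (such as "schema:name").
    const person = {
      "@context": "http://schema.org/",
      "@type": "Person",
      "name": "Jane Doe",
      "jobTitle": "Professor",
      "telephone": "(425) 123-4567",
      "url": "http://www.janedoe.example/"
    };

Processing a document like this is what forces a JSON-LD processor to fetch and apply the schema.org context, which is exactly the cost the test suite measures.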
The suite first tests pure JSON processing, then JSON-LD processing with a cached schema.org context, and then JSON-LD processing with an uncached schema.org context. We ran the tests using the largely unoptimized JSON-LD processors written in PHP and Node against a fully optimized JSON processor written in C. The tests used PHP 5.6, PHP 7.0, and Node.js 4.2. So, with Dr. Severance's data in hand and our shiny new test suite, let's do some science!

The Results

The raw output of the test suite is available, but we've transformed the data into a few pretty pictures below.

The first graph shows the wall-time performance hit, using basic JSON processing as the baseline (shown as 1x). It then compares that baseline against JSON-LD processing using a cached context and JSON-LD processing using an uncached context. Wall time in this case means the time that would elapse on a stopwatch started at the beginning of the test and stopped at the end. Take a look at the longest bars in this graph: there is a significant performance hit any way you look at it. Wall time spent processing JSON-LD with an uncached context in PHP 5 is 7,551 times slower than plain JSON processing! That's terrible! Why would anyone choose to use JSON-LD with such a massive performance hit? Even when you take out the time spent just sitting around, the CPU cost (for running the network code) is still pretty bad. Take a look at the longest bars in the CPU-time graph: CPU processing time for JSON vs. JSON-LD with an uncached context in PHP 5 is 260x slower. For PHP 7 it's 239x slower. For Node 4.2, it's 140x slower. Sounds pretty dismal, right? JSON-LD is a performance hog... but hold up, let's examine why this is happening.

JSON-LD adds meaning to your data. The way it does this is by associating your data with something called a JSON-LD context, which has to be downloaded by the JSON-LD processor and applied to the JSON data. The context allows a system to formally apply a set of rules to data to determine whether a remote system and your local system are speaking the same "language". It removes ambiguity from your data, so that you know that when the remote system says "homepage" and your system says "homepage", they mean the same thing. Downloading things across a network is an expensive process, orders of magnitude slower than having something loaded in a CPU's cache and executed without ever having to leave home sweet silicon home.

So, what happens when you tell a program to go out to the network and fetch a document from the Internet on every iteration of a 1,000-cycle for loop? The program takes forever to execute, because it spends most of its time in network code, waiting for I/O from the remote site. This is lesson number 1: JSON-LD is not magic. Things that are slow (because of physics) are still slow in JSON-LD.

Best Practice: JSON-LD Context Caching

Accessing things across a network of any kind is expensive, which is why there are caches. There are primary, secondary, and tertiary caches in our CPUs, there are caches in our memory controllers, there are caches on our storage devices, there are caches on our network cards, there are caches in our routers, and yes, there are even caches in our JSON-LD processors. Use those caches, because they provide a huge performance gain. A sketch of one way to do this in jsonld.js appears below.
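For example, jsonld.js lets you supply your own document loader, which you can wrap with an in-memory cache so that each remote context is fetched from the network only once. This is a minimal sketch against the promise-based jsonld.js API; the exact loader wiring has varied across library versions, so treat it as an outline rather than the one true implementation:

    // Minimal sketch: wrap the jsonld.js document loader with an
    // in-memory cache so each remote context is fetched only once.
    const jsonld = require('jsonld');

    const cache = new Map();
    const defaultLoader = jsonld.documentLoaders.node();

    async function cachingLoader(url) {
      if (!cache.has(url)) {
        // The first request pays the network cost; later ones do not.
        cache.set(url, await defaultLoader(url));
      }
      return cache.get(url);
    }

    // Use the caching loader for compaction (and other operations).
    async function compactDocument(doc) {
      return jsonld.compact(doc, 'http://schema.org/', {
        documentLoader: cachingLoader
      });
    }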
Let's look at that graph again and see how much of a performance gain we get by using the caches (look at the second-longest bars in both graphs): CPU processing time for JSON vs. JSON-LD with a cached context in PHP 5 is 67x slower. For PHP 7 it's 35x slower. For Node 4.2, it's 18x slower. To pick the worst case, 67x slower (using a cached JSON-LD context) is way better than 7,551x slower. That said, 67x slower still sounds really scary. So, let's dig a bit deeper and put some processing-time numbers (in milliseconds) behind these figures: with a cached context, JSON-LD processing takes roughly 2ms per call in PHP 5, 0.7ms in PHP 7, and 1ms in Node 4.2.

These numbers are less scary. In the common worst case, where we're using a cached context in the slowest programming language tested, it will take 2ms to do JSON-LD processing per CPU core. If you have 8 cores, you can process 8 JSON-LD API requests in 2ms. It's true that 2ms is an order of magnitude slower than pure JSON processing, but the question is: is it worth it to you? Is gaining all of the benefits of using JSON-LD for your industry and your application worth 2ms per request?

If the answer is no, and you really need to shave that 2ms off of your API response times, but you still want to use JSON-LD: don't do JSON-LD processing. You can always delay processing until later by just ensuring that your client is delivering valid JSON-LD; all you need to do that is apply a JSON Schema to the incoming data. This effectively pushes JSON-LD processing off to the API client, which has 2ms to spare. If you're building any sort of serious API, you're going to be validating incoming data anyway, and you can't get around that JSON Schema processing cost. A sketch of this approach appears below.
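As a rough sketch of that approach, a server can validate the JSON shape, including pinning the expected @context value, without ever invoking a JSON-LD processor. This example uses the ajv JSON Schema library for Node; the schema itself is illustrative, not something taken from the test suite:

    // Minimal sketch: accept only JSON that is already valid JSON-LD of
    // the expected shape, with no JSON-LD processing on the server.
    const Ajv = require('ajv');

    const schema = {
      type: 'object',
      properties: {
        // Pinning the context keeps the document's terms unambiguous
        // without running the JSON-LD algorithms on the server.
        '@context': { const: 'http://schema.org/' },
        '@type': { const: 'Person' },
        name: { type: 'string' }
      },
      required: ['@context', '@type', 'name'],
      additionalProperties: false
    };

    const validate = new Ajv().compile(schema);

    function acceptRequestBody(body) {
      if (!validate(body)) {
        throw new Error('Invalid input: ' + JSON.stringify(validate.errors));
      }
      // The body is still valid JSON-LD, so full JSON-LD processing can
      // happen later, on the client or in a batch job, if ever needed.
      return body;
    }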
I've never had a discussion with someone where 2 milliseconds was the deal-breaker between doing JSON-LD processing and not doing it. There are many things in software systems that eat up more than 2 milliseconds, but JSON-LD still gives you the choice of doing the processing at the server, pushing that responsibility off to the client, or a number of other approaches that provide different trade-offs.

But Dr. Severance said...

There are a few parting thoughts in Unconstrained JSON-LD Performance Is Bad for API Specs that I'd be remiss in not addressing.

"JSON-LD evangelists will talk about caching; this of course is an irrelevant argument because virtually all of the shared hosting PHP servers do not allow caching, so at least in PHP 'the caching fixes this' is a useless argument. Any normal PHP application in real production environments will be forced to re-retrieve and re-parse the context documents on every request/response cycle."

phpFastCache exists; use it. If for some reason you can't, and I know of no reason you couldn't, cache the context by writing it to disk and retrieving it from disk. Most modern operating systems will optimize this down to a very fast read from memory. If you can't write to disk in your shared PHP hosting environment, switch to a provider that allows it (which is most of them).

"Even with cached pre-parsed [ed: JSON-LD context] documents, the additional order of magnitude is due to the need to loop through the structures over and over, to detect many levels of *potential* indirection between prefixes, contexts, and possible aliases for prefixes or aliases."

That is not how JSON-LD processing works. Rather than go into the details, here's a link to the JSON-LD processing algorithms.

"json_decode is written in C in PHP and jsonld_compact is written in PHP, and if jsonld_compact were written in C and merged into the PHP core and all of the hosting providers around the world upgraded to PHP 12.0, it means that perhaps the negative performance impact of JSON-LD would be somewhat lessened... 'when pigs fly'."

You can do JSON-LD processing in 2ms in PHP 5, 0.7ms in PHP 7, and 1ms in Node 4. You don't need a C implementation unless you need to shave those times off of your API calls.

"If the JSON-LD community actually wants its work to be used outside the Semantic Web backwaters, or in situations where hipsters make all the decisions and never run their code into production, the JSON-LD community should stand up and publish a best practice to use JSON-LD in a way that maintains compatibility with JSON, so that APIs can be interoperable and performant in all programming languages. This document should be titled 'High Performance JSON-LD' and be featured front and center when talking about JSON-LD as a way to define APIs."

I agree that we need to write more about high-performance JSON-LD API design, because we have identified a few things that seem best-practice-y. The problem is that we've been too busy drinking our tripel mocha lattes and riding our fixies to the latest butchershop-by-day-faux-vapor-bar-by-night flashmob experiences to get around to it. I mean, we are hipsters, after all. Play-play balance is important to us, and writing best practices sounds like a real drag.
