The Security Practice

http://www.thesecuritypractice.com

July 06, 2021

The Journey to Universal HTTPS

Recently I was reading the book “The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger”.  I was struck by how many pieces and components had to fit together to achieve this big change, and wanted to reflect on a similar change in the security space - the nearly universal adoption of HTTPS for the web.

HTTPS has been around since 1994.  When it first came out we were mired in the crypto-wars in the US, and deploying encrypted protocols was still very tricky.  In the early days you really had to jump through hoops - manually adding modules to your web server (in the Unix world they had to be obtained separately), and even bigger hoops to get a certificate, which was usually expensive.

HTTPS was confusing and hard to turn on, and at the time its performance impact ranged from somewhat to hugely problematic.

In 2007, when I was working at PayPal, after some articles about phishing, downgrade attacks on users, and the like, we made the decision to move the whole website to HTTPS for all pages - not just the logged-in ones.  This put us squarely on the cutting edge for the time - many large companies, including banks, had websites that were mostly HTTP, even the login page, which merely submitted to an HTTPS endpoint.

For sites trying to convert universally to HTTPS, there were a few major obstacles:

- There were lots of performance concerns about putting all content behind HTTPS - many believed the costs of HTTPS couldn’t be justified for “public” content.
- Purchasing enough certificates/licenses for all of your webservers/domains was still a pretty costly endeavor.
- The conventional wisdom among the SEO crowd was that HTTPS hurt your Google page-rank.

At PayPal we also wanted more of the web to move to HTTPS, and luckily we were able to work in concert with Google and several other folks to address many of the general concerns people had.

- Adam Langley put out a post - https://www.imperialviolet.org/2010/06/25/overclocking-ssl.html - explaining why SSL/TLS was no longer a major performance issue for most use cases.
- Matt Cutts, Google’s page-rank guru, released guidance that HTTPS and its slight negative performance impact would not negatively affect page-rank.
- Google later took an even stronger position: HTTPS would positively impact page-rank.
- Let’s Encrypt was started in 2012 with the explicit goal of making it trivially easy to set up, use, and renew certificates for HTTPS.  There were free certificate authorities before it, but none combined that ease of use with implementation support.

From where we are today, maybe all of this looks inevitable - but it took many years of concerted work by many parties, aligning incentives and making HTTPS-by-default easy, to get where we are.
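The mechanics of HTTPS-by-default are simple in outline: redirect any plain-HTTP request to its HTTPS equivalent, and send an HSTS header so returning browsers never try HTTP again.  A minimal sketch of that logic (the framework-agnostic response shape and the max-age value are illustrative assumptions, not a recommendation):

```python
def https_upgrade_response(scheme: str, host: str, path: str) -> dict:
    """Sketch of HTTPS-by-default handling for one request.

    Plain-HTTP requests get a permanent redirect to HTTPS; requests
    already on HTTPS get a Strict-Transport-Security header so the
    browser remembers to use HTTPS for future visits.
    """
    if scheme == "http":
        return {
            "status": 301,
            "headers": {"Location": f"https://{host}{path}"},
        }
    return {
        "status": 200,
        "headers": {
            # One year, covering subdomains - an illustrative policy.
            "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
        },
    }
```

A real deployment layers more on top (certificate automation, mixed-content cleanup, HSTS preloading), but the redirect-plus-header pair is the core of the "easy by default" story.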

October 22, 2019

Combating Cybercrime - the 2011 Edition

Once upon a time back in 2011, Michael Barrett, Bill Smith, and I authored a paper on ideas for combating cybercrime.  The ideas were mostly in the areas of internet governance and public policy rather than technology.  That paper has long since vanished from the PayPal website, but I'm republishing it here since it was formerly widely available.


Download PayPal_CombatingCybercrime_WP_0411_v1

March 22, 2013

Enterprise Networks Need to Prepare for the Security Impact of New TLDs

Last week, after speaking with a member of ICANN’s Security and Stability Advisory Committee, Bill Smith and I authored a letter to ICANN expressing our concern with the proposed and potential delegation of certain names, such as “.corp” or “.internal”, that are currently in wide use as de facto internal network names.  You can read the full text of the letter here:

http://forum.icann.org/lists/bc-gnso/pdfNFDozNA9Ka.pdf

For many years it has been common practice to configure network services inside the organizational perimeter with suffixes not in the set of publicly-delegated top-level domains.  This practice has been recommended by various large software vendors and experts for security, manageability and conflict avoidance.  Unfortunately for those who heeded that advice, times are changing, and their internal names may soon conflict with the public DNS. 

Not only will the public delegation of these names create operational issues for internal networks, it may impose serious security risks on unprepared systems.  Even if network operators block external resolution of these delegations within their networks, they need to consider what may happen when a corporate laptop roams to a public network at a WiFi hotspot or on a user’s home Internet connection. 

Such systems are typically configured with automatic connectivity to a variety of services, including email, instant messaging, agent-based monitoring and maintenance, directory and policy services, and a variety of web applications.  These server endpoints are often authenticated using TLS, and any certificate that chains to a root in the trust store will be treated as valid.  As a consequence, when such a system roams off the internal network to a public one, it may soon find itself connecting to servers owned by the public registrant, considering them valid, and sending confidential information, including emails, sensitive files, passwords, HTTP cookies, and other authentication tokens.  A maliciously operated server could likely force damaging state changes on clients in addition to harvesting sensitive data.
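The failure mode is mechanical: standard TLS server validation checks only that the certificate name matches the requested host and that the chain ends in a trusted root - it has no concept of an "internal" name.  A toy illustration of that logic (all names below are hypothetical):

```python
# Toy model of TLS server validation from the roaming client's view.
# The client trusts any name-matching certificate that chains to a
# trusted root; nothing marks "mail.corp" as internal-only.
TRUST_STORE = {"PublicRootCA"}  # hypothetical trusted roots


def client_accepts(cert_subject: str, cert_root: str, expected_host: str) -> bool:
    """Simplified validation: subject name match plus a trusted root."""
    return cert_subject == expected_host and cert_root in TRUST_STORE

# On the corporate LAN, "mail.corp" resolves to the real mail server.
# If ".corp" is publicly delegated, its registrant can obtain a publicly
# trusted certificate for "mail.corp" - and the roaming client, resolving
# the name via public DNS, will connect and validate it successfully.
```

Nothing in the client is broken here; the assumption that the name could never resolve publicly is what breaks.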

ICANN has identified the top ten such names currently being queried against the public roots, and RFC 6762 identifies another partially overlapping set of names recommended for Multicast DNS.  It is our hope that ICANN and the applicants for these names will be able to agree on an appropriate course of action to protect the security and stability of the many systems currently relying on these names.  We recommend that at least these most commonly used names be permanently reserved for internal-only use, and individual applications be considered on a case-by-case basis for their impact in this regard.

Regardless of the outcome at ICANN, such network configurations should no longer be considered a best practice.  Rather than reducing risk to enterprise systems, they will soon increase it. Though few of the most-commonly used names have been applied for in the current round of new delegations, administrators should verify that no conflicts exist with the particular names in use on their networks: http://newgtlds.icann.org/en/program-status/application-results/strings-1200utc-13jun12-en 
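As a first approximation, administrators can make that verification mechanical by checking internal hostnames against the applied-for strings.  A sketch of such a check (the applied-for set below is an illustrative placeholder, not ICANN's actual data):

```python
# Hypothetical audit: flag internal hostnames whose top-level label
# matches a string applied for in the new gTLD round.
APPLIED_FOR_STRINGS = {"corp", "home", "mail", "network"}  # placeholder set


def conflicting_names(internal_hosts):
    """Return the internal hostnames whose suffix may collide with a
    publicly delegated TLD."""
    conflicts = []
    for host in internal_hosts:
        # Take the last DNS label, ignoring any trailing root dot.
        tld = host.rstrip(".").rsplit(".", 1)[-1].lower()
        if tld in APPLIED_FOR_STRINGS:
            conflicts.append(host)
    return conflicts
```

Fed with a dump of internal DNS zones and certificate subject names, a check like this gives a quick inventory of the names a migration plan needs to cover.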

Even if no conflicting name is proposed for delegation in the current round, such conflicts may arise in the future. Changing existing networks configured in this manner will likely take several years, and administrators should begin the process of migrating away from reliance on “internal TLDs” as soon as possible.

-Brad Hill

November 30, 2012

In Defense of HTML5

Many of the broad family of specifications commonly grouped under the “HTML5” umbrella are scheduled to be completed in 2013, and with the release of Internet Explorer 10, the users of every major web browser flavor can enjoy rich Web apps written on the open web platform, with no need for plugins. 

Lots of people are excited about HTML5, but one group I don’t see as particularly excited is security experts - or perhaps they’re only excited in a rather cynical fashion.  Full employment!  Browser botnets!  A lifetime of conference talks!  And the malediction against HTML5 isn’t just coming from folks with a product to sell or a slide deck to submit – HTML5 has become a common boogeyman representing out-of-control complexity and vast attack surface for some of the very best analysts and researchers in the field.  So, although developers are racing to embrace it, CISOs, CIOs and enterprise security decision makers as a group seem wary.

Frankly this puzzles and distresses me, because from my perspective, HTML5 is a key part – perhaps the most important part – in one of the greatest security success stories in the history of computing.  The story of the web browser over the last decade is the story of something completely unprecedented – a tremendous increase in functionality and use that happened side-by-side with a tremendous decrease in  vulnerability and attack surface.   Don’t believe me?  Let’s go back a decade…

2002 was an important time for me personally, as it’s right about the time I decided to move from being a developer interested in security to a full-time security professional.  It was Thanksgiving, and my family all gathered at my Aunt’s house.  She asked if I could take a look at her computer – it was running the latest and greatest Windows XP and was only a few months old – but it took 20 minutes to boot up and was slow as a dog.  Well, over the course of that weekend, my brother, father and I, with fifty years of computing experience between us, spent over 30 hours painstakingly removing 5000 viruses and pieces of malware from her system.  My cousins had invited some of it by downloading pirated games, but much of it was simply from browsing the web – and I was soon getting similar support requests from friends and family with decidedly vanilla browsing habits.  As I spent hours cleaning people’s computers, it was clear to me that the industry was in real trouble on the security front, and this would be a good direction to take my career.  In the decade since, I’ve been on the front lines of the Web security battle, working on everything from web apps to browsers, to operating systems and web standards.

And let me tell you, if you think HTML5 is a security disaster, you’ve utterly forgotten where we were ten years ago.  The first browser wars had mostly wound down, leaving Internet Explorer 6 with a commanding lead in market share, on both XP and the Mac.  HTML4 was the lingua franca of the Web, but the Rich Web was already here.  That’s one of the things that I think is most neglected in criticisms of HTML5 – most of the “new attack surface” isn’t actually new.  Sockets, cross-origin communications, multimedia, background processing, local storage – all the key components of the Rich Web Application were already in wide use by 2002, but they were in Flash, Java and ActiveX.  That’s the real benchmark against which we must compare HTML5.

Some will argue this isn’t a fair comparison (especially regarding ActiveX), but it was the fact on the ground.  One or more of those plugin technologies was installed on better than 97% of browsers, and they were in wide use.  You didn’t really have a choice: by 2003, I would guess 10-15% of the Web was unusable without Flash, because so many sites used it for their most basic navigation features.

While Java and Flash were designed with security in mind, they were also designed to compete for developers.  As long as security met an only casually scrutinized minimum, what developers were really interested in was features.  How much would it let you do?  Around this time I coined a maxim after John Gilmore's more famous one: “Developers on the web interpret security restrictions as damage and route around them.” And once these technologies had signed on developers, the platforms had very little security pressure on them.  If you as a developer became unhappy with the security flaws of your platform, the cost to switch was incredibly high – you had to rewrite your entire application.  As a consumer, it was even worse – you couldn’t switch, you had to live with the platform choices of application authors, or do without.

As for ActiveX?  Yes, it was a by-design arbitrary code execution technology, unlike the sandboxed Java and Flash runtimes.  But it’s still important to have it there, stacked up against HTML5, because it was what people used to deliver Rich Web Applications.  There were lots of legitimate ActiveX controls, and lots of legitimate sites trained users to accept ActiveX prompts (when the browser prompted at all) a little too readily.  But even legitimate ActiveX controls were far more dangerous than Flash or Java.  Most were just wrappers around a big blob of unsafe legacy code, never designed to be deployed in a hostile environment.  Outside of those I looked at written by Microsoft itself, in my years of pentesting I never encountered an ActiveX control that didn’t fall over in the first five minutes of fuzzing, and I almost never encountered one that was site-locked.  This meant that even if you were careful about what controls you approved, the ones you’d accepted as safe to use in a non-malicious web page or that were from trusted authors could still be silently instantiated on any malicious page and trivially exploited with the most basic stack-based buffer overflows.

And the browsers themselves were little better.  IE 6 was notorious Swiss-cheese, and the reality behind it was really far worse than probably almost anyone realizes.  Remember, the browser wars had led to the same market pressures as those affecting plugin technologies.  Build more features, lock developers in, expose as many APIs as possible, integrate as deeply as you can with the OS and do it fast, fast, fast.   Brendan Eich has told the story many times of how, at Mozilla, he only had 10 days to design and implement JavaScript and, despite its brilliance, he is the first to admit we’re still also living uneasily with its mistakes today. That story is just one famous example of the state of the whole industry in the 90’s.  There was little or no time for security review, no concern for attack surface, and once anything went into the customers’ hands, it was very hard to claw back.

It was only after drive-by installs that basically destroyed customers’ computers – like my Aunt’s – threatened Microsoft’s business to the core that they really had the courage to start to roll things back and break things as they built IE7 and Vista.  I was there for some of that, so I can say: “wow”.  It was heroic work, and the depths of the attack surface of IE6 were astounding.  It’s one of the reasons why IE7 on Vista was the first major browser to have a serious sandbox – it was the only way to cope with the complexity.  Does anyone remember the “Explorer view” that let you see a web directory listing as if it were part of your local filesystem in Windows Explorer?  That integration with the core OS shell meant there was hardly a single line of code in the OS you couldn’t reach with tainted data from a random web page.  Or have you heard of “binary behaviors” – ActiveX controls that could be silently attached to CSS properties?  Microsoft didn’t just remove that one, they scrubbed any mention of it from MSDN, to boot.  And take a look through the stuff that only gets enabled in the “trusted zone” today to see some of the more dangerous things that were available everywhere in IE5 and 6.  Netscape, for its part, was better only by virtue of being OS-agnostic – it was still turning out its own dangerous features, many of which are just now being removed as it implements HTML5.

But if IE7 was the start of a turnaround, it was only a start.  Plugins remained ubiquitous, and the rest of the Web caught mashup mania, which pulled us in another dangerous direction.   To give one example, before it was acquired, WebEx claimed its flagship Connect product was going to revolutionize the industry by providing a mashup environment that was basically a wrapper around IE that removed the Same-Origin Policy.  This sounds absurd now, but they were far from the only company putting serious effort into this kind of thing – remember, “developers on the Web interpret security restrictions as damage and route around them.”

The real force that changed things in Web and browser security came from something widely recognized as revolutionary, but not in this particular domain: the iPhone, when Apple declared that it would not allow Flash, Java or other plugins on the platform.  Though Apple claimed it was for security and reliability reasons, let’s not kid ourselves – plugins were ways to get content and applications onto a closed platform without paying Apple, and, along with jailbreaking, they were the reason Apple finally started paying serious attention to platform security.  Despite these selfish motives, in the end this turned out to be the start of the best thing that’s ever happened to Web security.  Because even with apps to fill some of the gap, everyone still wanted the Rich Web on their iPhone.

And the coolness of the iPhone meant that every developer wanted to target the platform – so they had to get serious about looking for ways to do it that didn’t involve plugins.  From this grew much of HTML5’s momentum to create a standards-based platform for the Rich Web.  And with Google’s Chrome entering the market shortly after, we had the start of a “new browser war”, but a war that looked very different than the first one – because it was driven by standards.

And what does it mean to have an open, standards-driven platform as the “ground rules” for a new browser war?  It means that browsers are competing on how fast and how well they implement those standards, how fast the browsers themselves are, and how secure they are.  In particular, they have to answer to users’ security concerns, much more than do plugin vendors, because when all web apps work in all browsers, the cost to the consumer of switching if they are impacted, or even if they hear that a browser is insecure, is very low.

And standards authors are not beholden to individual customers of their features – they are willing to break things – like the example of WebSockets, where public research revealed serious vulnerabilities that necessitated a fix that broke every existing application and implementation.  We simply never saw that kind of thing happen in the first browser wars.  And it is also noteworthy that, while the standards process is often derided for its slowness, a deliberate pace combined with public review means that HTML5 specs have gotten better security scrutiny than any browser features have ever before.  The HTML5 family was not only better designed, learning from the lessons of Rich Web 1.0, but even the new ground it broke was subject to incredible advance security review by the best experts in the industry.  That’s something that just can’t happen when you give one guy in one company 10 days to design and implement something.

So, what’s our scorecard finally look like?  Ten years ago, IE 6 had something like 80% market share, was full of trivially exploitable memory corruption flaws, un-sandboxed, and deeply wormed into the OS, exposing tens of millions of lines of unhardened code to the Web attack surface.  Nearly all the features people worry about with HTML5 were already implemented, in multiple plugin systems that had >95% penetration, each with a different security model, all also un-sandboxed, generally with worse code quality than the browser itself.  Beyond that, users frequently ran ad-hoc blobs of unsafe code that executed with full privileges and were almost always highly vulnerable to trivial exploit.  And all the competitive pressures at the time were making things worse, not better.  The result for users was clear – constant vulnerability and systems infected on a daily basis by adware, malware and botnets.

Today, largely thanks to HTML5, for the first time since Netscape 2 we have a large number of users browsing the web in environments that don’t support binary plugins at all.  Rich Web application authors write and deliver their code in a memory-safe language, JavaScript, that lives inside a Same-Origin Policy sandbox, that further lives in a browser sandbox.  The old plugin systems are still there on the desktop, and still the source of many of the worst vulnerabilities – but they are fading fast.  Little new content is being developed for them and much existing content is being converted to reach the new mobile audiences. 

App developers now have one programming and one security model to learn to write secure apps.  It’s more complex than HTML4, but less complex than HTML4 + Java + Flash + ActiveX.  And it’s designed to be secure – compare WebSockets to their Java equivalent, or CORS to its Flash and Silverlight counterparts and the difference is clear. (Scott Stender and I gave a talk that goes into much more detail on writing secure apps with HTML5 at the first W3Conf, watch it here: http://www.w3.org/conf/2011/)
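To see the design difference concretely: CORS keeps the default at "deny" and makes cross-origin reads an explicit, per-origin server opt-in.  A minimal sketch of the server-side decision (the allow-list and the function shape are illustrative assumptions, not any particular framework’s API):

```python
# Sketch of a CORS response decision: the server opts specific origins
# in to cross-origin reads; everything else defaults to deny.
ALLOWED_ORIGINS = {"https://app.example.com"}  # illustrative allow-list


def cors_headers(request_origin):
    """Return CORS headers to attach, or {} to withhold the opt-in
    (the browser then blocks the cross-origin read)."""
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Vary": "Origin",  # response differs per Origin; keep caches honest
        }
    return {}
```

The key property is that failing to act is safe: a server that never sends the header never grants access, which is the opposite of the bolt-on models it replaced.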

All major browsers now have rapid self-update systems that keep them patched, and users can switch browsers for security improvements at little or no cost.  Instead of every penny-ante spammer putting drive-by malware on your system, Google finds it has to pay $60,000 and offer a lot of public prestige to get bugs turned in through its Pwnium events.  On the black market, such bugs fetch six figures.

But wait, you say, I hear about these bugs in the news every couple of weeks!  Exactly!  This kind of bug is man-bites-dog now.  Nobody would’ve bothered to write a breathless news story about a memory corruption crasher in Netscape 6 – after all, who would listen or care when browsers crashed hourly from non-malicious input, anyway?

So, security people, stop the fear and hate for HTML5, because what we witnessed in the browser world in the last decade was, honestly, a miracle.  We saw a class of applications grow exponentially in their use and complexity, while at the same time not just fixing bugs and becoming practically more secure – but drastically and systematically reducing their attack surface.  I don’t think that’s ever happened before – functionality and attack surface moving in opposite directions, so dramatically, with so much momentum, and for so long.  And it didn’t take any regulatory or agency incentives – it began with self-motivated business decisions by companies like Microsoft, Apple and Google, and the open standards process turned it into a virtuous cycle for users and the ecosystem as a whole.

So, if you’ve been worried about the security implications of HTML5, don’t worry – it’s here to help, and it already has. We still have a long way to go in securing the web, and we must remain vigilant and revise and improve standards as we learn, but we are in a much better position than we were ten years ago.


-Brad Hill

October 05, 2012

Certificate Authorities asked to step up for Internet security; CABF takes a step back instead.

Hello, Michael Barrett here.

Back in February, my team and I came to the CA/Browser Forum to initiate a reform process to help grow that organization into a more mature and capable body, able to address the growing threats to trust on the Internet and the looming crisis of confidence in the Certificate Authority system.


In the wake of the DigiNotar breach and other weaknesses appearing across the HTTPS ecosystem, we felt that several systematic problems stood in the way of improving this critical part of the Internet’s trust infrastructure – problems that would require coordination among browsers, CAs and website operators to solve.  Although the CA/Browser Forum was not set up to deal with these problems in its current form, it was the best thing available – the place where we had most of “the right people in the room”, representing the vast majority of the market share of web browsers, operating systems, smartphone platforms and the certificate issuers themselves – we just needed to have representation for users to establish legitimate technical trust requirements.


In response to our request, the organization agreed to solicit ideas and comments from the larger Internet community for the first time in its history, and together we embarked on a process to explore ways to move the organization forward.  That process came to an end with a vote among current Forum members a few days ago, and the organization has chosen not to step into a larger role.  Despite calls from across the Internet community for openness and inclusiveness, the CA/Browser Forum chose to remain a closed group concerned primarily with the details of audit requirements for browser trust store programs; visible to, but with no vote for, the user community.  With this action, we believe the CABF remains a crippled institution that cannot adequately address the full range of certificate issues impacting literally billions of Internet users.  Sadly, the moment the CA/Browser Forum has chosen to make its proceedings publicly visible also appears to be the moment it has decided to be largely irrelevant.


The issues for which the Forum has chosen to sidestep responsibility remain and grow more urgent.  While disappointed, my team and I remain committed to the Internet being a safe and secure place to socialize, organize, communicate and conduct commerce.   


We will work where we can to advance and preserve an Internet that deserves users' trust: in public/private partnerships, in direct collaboration with industry partners, and in organizations like the IETF, W3C, and other fora where we and others can participate on an equitable basis.  Given the importance of these issues and the continued lack of any credible multi-stakeholder, self-governing organization to address them, we expect increased national and international regulatory pressure, and we will work within organizations like the ITU, where these issues are certain to be raised.

September 12, 2012

Brief status: Internet Standards Efforts for Web Security

Hi, Jeff Hodges here.

I gave a brief overview presentation on the status of Internet standards efforts for web security at the recent OWASP Bay Area chapter meeting (20-Aug-2012):

Internet Standards and Web Security

The presentation highlights some of the "recent", "emerging", and "in genesis" web security standards (e.g., respectively: HTTP State Management, HSTS, and UI Safety), related and leverageable standards (e.g., DNSSEC, DANE/TLSA), as well as fora where this work is occurring (some of which we helped create):

W3C Web App Security Working Group 
IETF WebSec Working Group

It also summarizes some things not yet cohesively approached (e.g., browser security indicators, the Web PKI trust model situation) and suggests some things you can do to help move web security forward.

July 23, 2012

CA/Browser Forum Governance Reform Proposals Published

A number of months ago, PayPal approached the CA/Browser Forum with a proposal that it restructure its governance model to become a more mature and capable organization, able to take on the growing threats and challenges to the CA and HTTPS ecosystem in a more comprehensive manner.  We felt that key features of such a reform included openness, transparency, and an equal seat at the table for site operators and the relying-party public that depends on the trustworthiness of the global consumer PKI.

After several months of consideration and discussion, the CABF has now published four governance proposals on its website: https://cabforum.org/.  We are pleased that all four proposals include a much greater degree of transparency than has historically been the case at the Forum, as well as some ability to accept public input and contributions.  Unfortunately, while all four consider a role for users of certificates, in the other three proposals CAs and browsers remain “more equal” than their customers.  The distinguishing feature of PayPal’s proposal remains equal and proportional representation for constituency groups representing Certification Authorities, Browsers and Users.

Balloting among the current voting members of the Forum to complete the governance reform process will begin shortly.  If you have an interest in the future of secure communications and commerce on the Internet – and we believe everybody does – we encourage you to review the proposals and submit comments to questions@cabforum.org. 

PayPal’s proposal can be found here: http://cabforum.org/governance/PayPal_Governance_Proposal.pdf

-Brad Hill, Ecosystem Security Team, PayPal Information Risk Management

June 21, 2012

Help Keep PayPal Safe – The Bug Bounty Way!

In an effort to keep PayPal the safe and secure online payment company, we are pleased to announce the launch of a bug bounty program!  Our official announcement is here, and the official details of the program can be found here: https://cms.paypal.com/cgi-bin/marketingweb?cmd=_render-content&content_ID=security/reporting_security_issues.

PayPal believes in proactively securing our site because we value our customers' privacy and the protection of their financial information.

Like other security programs, we have guidelines that we expect the security researchers to follow:

- Please do not disclose your findings to the general public; otherwise, we will not pay you.
- Do not bring down our site while doing your security research.
- And definitely do not send us sensitive information in your findings.
- Do allow us reasonable time to research your claim(s).  This process may take some time, so please be patient.

We know that there are many vulnerabilities out in the application security world.  For now, our bug bounty program will only accept the following types of bugs: XSS, CSRF/XSRF, SQLi and Authentication Bypass.  However, if you do find an issue that you think we should really know about, submit it to us anyway and we will look into it.  The scope of the program is for bugs found on our main domain – www.paypal.com.  We will expand the program when we deem it appropriate.

We are introducing a bounty program – similar to what others out there have done – because we see a long-term benefit to the company when we rely on the larger security community to help us protect our site.

We welcome your feedback on our bug bounty program and we look forward to your vulnerability bug submissions!

Nam Wu

May 21, 2012

New Research and Progress Against Clickjacking at the W3C

The Web Application Security Working Group at the W3C is nearing one year since its charter was approved. In that time, it’s made some quite important progress. Content Security Policy (CSP) 1.0 is fast approaching Working Group Last Call, version 1.1 is under draft, and Cross-Origin Resource Sharing (CORS) has completed Working Group Last Call with a new security considerations section. Congratulations and thanks to the editors Anne van Kesteren, Adam Barth and Brandon Sterne.

The other threat this WG was chartered to address was clickjacking. Since its flashy debut five years ago, clickjacking and other variants on user interface redressing attacks have remained a quiet thorn in the side of web application authors and security teams. With X-Frame-Options as the only commonly implemented solution to emerge in that time, vulnerable applications are left with the unappealing choice of accepting the risk of clickjacking or opting-out of in-context display altogether. With the increasing popularity of in-context mashups for applications like social networking, shopping and payments, the risks to users and the potential profits for attackers are quite real.
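In practice that choice is expressed in a single response header.  A minimal sketch of emitting it (the trusted-embedder origin is an illustrative assumption, and note that the selective ALLOW-FROM form had uneven browser support at the time):

```python
# Sketch: anti-clickjacking via X-Frame-Options, the only commonly
# implemented defense at the time.  A site that must allow one trusted
# embedder can vary the header per response.
TRUSTED_EMBEDDER = "https://partner.example.com"  # illustrative origin


def frame_protection_headers(allow_embedding: bool) -> dict:
    """Return the response header restricting who may frame this page."""
    if not allow_embedding:
        return {"X-Frame-Options": "DENY"}
    # The selective form - support varied by browser, which is exactly
    # the all-or-nothing problem described above.
    return {"X-Frame-Options": f"ALLOW-FROM {TRUSTED_EMBEDDER}"}
```

The coarseness of this header - deny framing or accept the risk - is what motivated the heuristic-plus-reporting approach the working group agreed to pursue.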

This is why I’m so pleased that at the WebAppSec WG’s second face to face meeting on May 2-3, a real start was made towards effective risk mitigation for many forms of clickjacking. At the end of a session devoted to clickjacking, the WG agreed to proceed towards standardizing a combination of client-side detection heuristics, informed by application-supplied policy hints, with a reporting loop that can feed into server-side anti-fraud systems. Web security pioneer Giorgio Maone, author of NoScript, has generously volunteered to edit the new specification and contribute the techniques he developed with ClearClick. David Lin-Shung Huang will co-edit, bringing his own insights and refinements from research into anti-clickjacking technology at Carnegie-Mellon University Silicon Valley and Microsoft Research.

I’m also happy that some of the research we’ve been doing here at PayPal on anti-fraud analytics and clickjacking will be contributed to the standard, to help close the loop between user agent detection and application response and remove the need for yet another browser security dialog box. You can read more about that work for the first time here:

Adaptive User Interface Randomization as an Anti-Clickjacking Strategy

If you’re interested in helping to fix clickjacking, please follow the WG's progress at public-webappsec@w3.org.

- Brad Hill

May 10, 2012

Raising the Bar with the Domain Policy Framework

PayPal is pleased to be invited by Artemis Internet Inc. to participate in the Domain Policy Working Group.  PayPal has long been a leader in multi-stakeholder efforts to tackle the problems of crime, fraud and phishing on the Internet.  We have identified the need for and advocated uniform security policy frameworks to address Web security (Web Security Framework: Problems and Requirements), are early adopters of DNSSEC for secure domain names, and helped develop standards like Domain-based Message Authentication, Reporting & Conformance (DMARC) for email authentication and HTTP Strict Transport Security (HSTS) for secure web site communications.  As such, we support the opportunities presented by the Domain Policy Framework (DPF) for broader adoption of these and other security technologies.
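Of the technologies mentioned, HSTS is a good example of how lightweight a per-domain policy can be: it is a single `Strict-Transport-Security` response header whose directives tell the browser to use HTTPS for the domain. A small sketch of parsing that header (the function name and example values are hypothetical, for illustration only):

```python
def parse_hsts(value: str) -> dict:
    """Split a Strict-Transport-Security header into its directives.

    'max-age=31536000; includeSubDomains' ->
        {'max-age': '31536000', 'includesubdomains': None}
    Directive names are case-insensitive; valueless directives map to None.
    """
    directives = {}
    for part in value.split(";"):
        part = part.strip()
        if not part:
            continue
        if "=" in part:
            name, _, val = part.partition("=")
            directives[name.strip().lower()] = val.strip().strip('"')
        else:
            directives[part.lower()] = None
    return directives
```

A `max-age` of a year or more combined with `includeSubDomains` is the kind of uniform, enforceable commitment a domain-wide policy framework could require of registrants.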

The DPF offers an opportunity to take an important step towards meeting users’ expectations of a safe Internet experience.  Users often have difficulty interpreting technical security indicators (such as the "HTTPS lock icon") and even the technically savvy have no easy way to distinguish between a “secure” site (using HTTPS) and a “safe” one that will not victimize its users.  We are encouraged by the effort to create user-recognizable spaces on the Internet where uniform and modern best practices for security and safety will be enforced.  We also look forward to a time when domain registrants will be held to high standards for truthful and accurate self-identification, trustworthy operation, and protection of users.  As the viability and benefits of such an approach are demonstrated in parts of the new gTLD space, we hope this work will help accelerate efforts to enable a safer Internet everywhere, for all users.

 - Brad Hill

