netsekure rng

WebSite: http://netsekure.org

One of the main pieces of functionality in a browser is navigation. It is the process through which the user gets to load documents. Let us trace the life of a navigation from the time a URL is typed in the URL bar until the web page is completely loaded. In this post I will be using the word browser to describe the program the user sees, and not just the browser process, which is the privileged one in Chromium's security model.

The first step is to execute the beforeunload event handler if a document is already loaded. It allows the page to prompt the user whether they want to leave the current one. It is useful in cases such as forms, where the result has not been submitted, so the form data is not lost when moving to a new document. The user can cancel the navigation and no more work will be performed.

If there is no beforeunload handler registered, or the user agreed to proceed, the next step is the browser making a network request to the specified URL to retrieve the contents of the document to be rendered. Chromium's implementation uses the term "provisional load" to describe the state it is in at the start of the network request. Assuming no network-level error is encountered (e.g. DNS resolution error, socket connection timeout, etc.), the server responds with data, and the response headers come first. Once the headers are parsed, they give enough information to determine what needs to be done next. The HTTP response code allows the browser to know whether one of these conditions has occurred:

- A redirect has been encountered (response 3xx)
- An HTTP-level error has occurred (response 4xx, 5xx)

There are two cases where a navigation can complete without resulting in a new document being rendered.
The first one is HTTP response codes 204 and 205, which tell the browser that the response was successful, but there is no content that follows, therefore the current document must remain active. The other case is when the server responds with a header indicating that the response must be treated as a download. All the data read by the browser is then saved to the local filesystem based on the browser configuration.

The server can also send a redirect, upon which the browser makes another request based on the HTTP response code and the additional headers. It continues following redirects until either an error or success is encountered.

Once there are no more redirects, if the response is not a 204/205 or a download, the browser reads a small chunk of the actual response data that the server has sent. By default this is used to perform MIME type sniffing, to determine what type of response the server has sent. This behavior can be suppressed by sending an "X-Content-Type-Options: nosniff" header as part of the response headers. At this point the browser is ready to switch to rendering the new document. In Chromium's implementation, the term used for this point in time is "commit". Basically, the browser has committed to rendering the new document and removing the old one.

However, before the commit is performed, the old document needs to be notified that it is going away, so the browser executes the unload event handler of the old document, if one is registered. Once that is complete, the old document is no longer active, the new document is committed, and in strict terms, the navigation is complete.

The astute reader will realize that even though I said navigation is complete, the user actually doesn't see anything at this point. Even though most people use the word navigation to describe the act of moving from one page to another, I think of that process as consisting of two phases.
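The response-code handling described above can be sketched as a small decision function. This is a toy model, not Chromium code - the function and outcome names are mine, and only the HTTP status codes and header names come from the actual behavior described in this post:

```python
def navigation_outcome(status, headers):
    """Classify a navigation based on the HTTP status and response headers."""
    if 300 <= status < 400 and "Location" in headers:
        return "follow-redirect"          # make another request to the new URL
    if status in (204, 205):
        return "keep-current-document"    # success, but no content follows
    if headers.get("Content-Disposition", "").startswith("attachment"):
        return "download"                 # save the response data to disk
    if 400 <= status < 600:
        return "commit-error-page"        # still commits a (generated) document
    return "commit-document"              # ready to commit the new document

def should_sniff_mime_type(headers):
    """MIME sniffing is suppressed by 'X-Content-Type-Options: nosniff'."""
    return headers.get("X-Content-Type-Options", "").lower() != "nosniff"
```

Note that an error response still falls through to a commit - of an error page rather than the current document - which is exactly the distinction the next section draws between the navigation and loading phases.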
So far I have described the navigation phase, and once the navigation has been committed, the browser moves into the loading phase. It consists of reading the remaining response data from the server, parsing it, rendering the document so it is visible to the user, executing any script accompanying it, as well as loading any subresources specified by the document. The main reason for splitting it into those two phases is how errors are handled.

This brings us back to the case where the server responds with an error code. When this happens, the browser still commits a new document, but that document is an error page it either generates based on the HTTP response code or reads as the response data from the server. On the other hand, if a successful navigation has committed a real document from the server and has moved to the loading phase, it is still possible to encounter an error - for example, a network connection can be terminated or time out. In that case the browser displays as much of the new document as it has parsed.

Chromium exposes the various stages of navigation and document loading through methods on the WebContentsObserver interface:

- DidStartNavigation - invoked at the point after executing the beforeunload event handler and before making the initial network request.
- DidRedirectNavigation - invoked every time a server redirect is encountered.
- ReadyToCommitNavigation - invoked at the time the browser has determined that it will commit the navigation.
- DidFinishNavigation - invoked once the navigation has committed.
It can be either an error page, if the server responded with an error code, or the browser has switched to the loading phase for the new document on a successful response.
- DidStartLoading - invoked when a navigation is about to start, after executing the beforeunload handler.
- DocumentLoadedInFrame - invoked when the document itself has completed loading; however, it does not mean that all subresources have completed loading.
- DidFinishLoad - invoked when the document and all of its subresources have been loaded.
- DidStopLoading - invoked when the document, all of its subresources, all subframes and their subresources have completed loading.
- DidFailLoad - invoked when the document load failed, for example due to network connection termination before reading all of the response data.

Hopefully this post gives a good introduction to navigations in the browser and should be a good base to build on for future posts.

Chromium was designed from the very start as a multiprocess browser. Most people think it has one process for each tab, and while that is somewhat close to the truth, the real picture is a bit more complicated. It supports a few different modes of operation, which differ in how web pages are assigned to processes. Those are called "process models". It is highly recommended to read the previous posts introducing some basic concepts used by Chromium, which I will use to explain how the different process models work.

Chromium uses the operating system process as a unit of isolation. It uses Blink to render web documents, which it runs in restricted renderer processes. The sandbox does not allow renderer processes to communicate with each other; the only way to achieve that is to use the browser process as an intermediary.
This design allows us to isolate web pages from each other and potentially have a different level of privileges for each process.

Before delving into the actual models the browser supports, there are a couple more bits of detail to cover - cross-process navigation and SiteInstance caveats.

Cross-process navigation

A tab in the browser UI gets a visual representation of the web page from the renderer process and draws it as its content. When navigating from one page to another, the browser makes a network request and gives the response to Blink for rendering. Often, the same instance of the rendering engine, running in the same process, is used. However, in many cases, navigations can result in a new renderer process being created and a brand new instance of Blink being instantiated. The response is handed off to the new renderer process and the tab is then associated with the new process.

The ability to perform cross-process navigations is a core part of Chromium's design. It incurs the cost of starting a new process, but it also improves performance, as a new process has a clean memory space, free of fragmentation. Abandoning the old process can be quick (process kill) and also helps mitigate memory leaks, as that process exits and its memory is released back to the operating system. Most importantly, though, changing processes under the hood is a key building block of the security model.

Chromium's security model also allows for different privilege levels for content being rendered. In general, any content coming from the web is considered the lowest privilege level. Chromium internal pages, such as chrome://settings, require more privileges, as they need to read or modify settings or data available only in the browser process.
The security model does not allow pages from different privilege levels to use the same process, so a cross-process navigation is enforced when crossing privilege levels.

SiteInstance caveats

Previous posts in this series described SiteInstance and said that in the example setup, all of a.com, b.com, c.com, d.com were SiteInstances. This is the ideal model to use and is the goal of the Site Isolation project, but currently Chromium does not reflect this in reality. Here is a list of caveats that apply to the default Chromium configuration at the time of this post:

- SiteInstance is assigned only to the top-level frame by default. Subframes share the same one as the main frame.
- SiteInstance does not always reflect the URL of the current document. Once the SiteInstance URL is set, it doesn't change, even though the frame can navigate across many different Sites. However, SiteInstance can change when navigating cross-site.[1]
- Chromium avoids process swaps on cross-site renderer-initiated navigations (e.g. link clicks), because those would be likely to break script calls on windows that expect to communicate with each other (and thus break compatibility with the web). In contrast, it tends to use process swaps on cross-site browser-initiated navigations (e.g. typing a URL in the omnibox), because the user is making an effort to leave the site, so it is not as bad to break the script calls.

Process models

To help illustrate the difference between the process models, I have included screenshots of the Chromium Task Manager showing the processes and what URLs they are rendering. The setup I have used is the following:

- Tab navigated to http://tests.netsekure.org/main-b-c.html. The main document includes two iframes - one to https://google.com and one to https://github.com. It also contains a button that opens a new tab through a window.open() call.
This causes the new tab to be in the same BrowsingInstance as the tab that opened it.
- Newly opened tab from the initial tab, which is navigated to http://tests.netsekure.com/main-sub.c.html. Notice that it is a different top-level domain from the initial tab (.org vs .com). It includes an iframe to https://pages.github.com.
- User-opened tab navigated to http://tests.netsekure.com/empty.html
- User-opened tab navigated to http://tests.netsekure.com/to_slow.html

Single process

This is a mode in which Chromium does not use multiple processes. Rather, it combines all of its parts into a single process. It is also a mode in which there is no sandboxing, as the browser needs access to both the network and the filesystem. It exists mainly for testing and it should never be used!

Process per tab

This process model is the simplest one to understand and is what most people intuitively think is the mode of operation of the browser. Each tab gets a dedicated sandboxed process that runs the Blink rendering engine. Navigations do not usually change processes. Note, however, that since the security model does not allow content with different privileges to live in the same process, it does actually change processes on privilege change. An example would be a navigation from https://dev.chromium.org to chrome://settings.

Process per Site

In this process model, each Site gets mapped to a single process. When multiple tabs are navigated to the same Site, they all share the same process. Navigations can change processes. It is not the default model, since running multiple tabs with heavy web pages, such as Google Docs, leads to low performance - too much contention on the main thread, memory fragmentation, etc.

Process per Site instance

This is the default process model for Chromium. Each SiteInstance is mapped to a process by default. Multiple tabs navigated to the same Site end up in separate SiteInstances, therefore they reside in separate processes. Navigations also can change processes.
All the SiteInstance caveats apply, and not the idealized version of SiteInstance.

Site per process

This is an experimental process model for developing the Site Isolation project. It comes closer to the desired design for Chromium, where there is a SiteInstance for each frame. Additionally, it uses the idealized definition of SiteInstance, where only URLs from the same SiteInstance can be loaded in the same process. Navigations can change processes in any frame on a page, whereas all other process models support changing processes only on the top frame.

I hope these posts have helped demystify a bit how Chromium makes decisions on which process to use for a specific tab and URL. If there are other clarifications I can make, feel free to ping me over on Twitter and I would be happy to.

In a previous post, I covered the basic security principal that Chromium uses for its security model. The goal of this post is to outline a few details that are vital to understanding the limitations imposed on the process model. It will look at somewhat obvious parts of the web platform, framed in HTML spec speak.

When a browser is navigated to a URL, it makes a network request to the server specified for the document identified in the URL. The response is a document*, which is then parsed and rendered in a window. Those should be familiar, since they correspond to the identically named objects in JavaScript. This holds true for iframes as well, which have their own window objects, which host the respective documents. The HTML spec uses a different name for window - "browsing context" - while it keeps document as the same concept.
There are a few types defined by the standard:

- top-level browsing context - the main window for a page
- nested browsing context - a window embedded in a different window, for example through an iframe tag
- auxiliary browsing context - a top-level browsing context "related" to another browsing context, or put in simpler speak - any window created through the window.open() API, or a link with a target attribute.

I will use frame to refer generically to any browsing context - be it a page or an iframe, as they are basically the same concept with two different names based on the role they play.

There are two concepts the HTML spec defines that are important to understand. The first one is "reachable browsing context". This is somewhat intuitive, as all frames that are part of a web page are reachable to each other. In JavaScript this is exposed through the window.parent and window.frames properties. In addition, related browsing contexts are reachable too, by using the return value of window.open() and the window.opener property. For example, if we have a page with two iframes, which opens a new window with an iframe, then all of the frames are reachable.

The set of reachable frames - all of them in the above case - forms the other concept the standard defines - "unit of related browsing contexts". It is important because documents that want to communicate with other documents are allowed to do so only if they are part of the same unit of related browsing contexts. Internally, the Chromium source code uses the BrowsingInstance class to represent this concept. For the sake of brevity, I'll use this name from here on.

When two documents want to communicate with each other, they need to have a reference to the window object of the target document.
Any frame in a BrowsingInstance can get a reference to any other frame in the same BrowsingInstance, since they are all reachable by definition. How documents can interact with each other is governed by the same origin policy. When documents are from the same origin, or can relax their origin to a common one, they are allowed to access each other directly. Cross-origin documents, on the other hand, are not allowed such access. So a BrowsingInstance can be split into sets of frames, grouped by the origin they are from. But recall that we can't easily use the origin as a security principal in Chromium. This is why we use the concept of SiteInstance - the set of frames in a BrowsingInstance which host documents from the same Site. It is vital to remember that the Chromium browser process makes all of its process model and isolation decisions based on SiteInstances, and not based on origins.

The HTML spec requires all same-origin documents which are part of the same unit of related browsing contexts to run on the same event loop - or in other words, the same thread of execution within a process. This means that all frames which are part of the same SiteInstance must execute on the same thread; however, different SiteInstances can run on different ones. In the example above, the two pages are in the same BrowsingInstance, because they are related through the window.open() call. The different SiteInstances should be for a.com, b.com, c.com, d.com.

Overall, it all boils down to the following rules that Chromium needs to abide by:

- All frames within a BrowsingInstance can reference each other.
- All frames within a SiteInstance can access each other directly and must run on the same event loop.
- Frames from different SiteInstances can run on separate event loops.

Phew! Now there is enough background to start delving into the details of Chromium's implementation of these concepts from the HTML spec and its process allocation model.

* Unless the result is a file to be downloaded or handled by an external application.
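The grouping described above - one BrowsingInstance split into SiteInstances by Site - can be sketched in a few lines of Python. This is a toy model, not Chromium code; the site() helper here naively takes the scheme plus the last two host labels, ignoring the Public Suffix List, so it is only good enough for hosts like a.com or sub.a.com:

```python
from collections import defaultdict
from urllib.parse import urlsplit

def site(url):
    """Toy 'Site' for a URL: scheme plus the last two labels of the host."""
    parts = urlsplit(url)
    registered = ".".join(parts.hostname.split(".")[-2:])
    return f"{parts.scheme}://{registered}"

def site_instances(browsing_instance):
    """Group the frames of one BrowsingInstance by the Site they host."""
    groups = defaultdict(list)
    for frame_url in browsing_instance:
        groups[site(frame_url)].append(frame_url)
    return dict(groups)

# All of these frames can reference each other (same BrowsingInstance),
# but only frames within the same group may access each other directly.
frames = ["https://a.com/", "https://sub.a.com/x", "https://b.com/", "https://c.com/"]
```

Here site_instances(frames) yields three groups, with a.com and sub.a.com sharing one - matching the rule that subdomains of the same Site land in the same SiteInstance.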
I have seen many people versed in technology and security make incorrect statements about how Chromium's multi-process architecture works. The most common misconception is that each tab gets a different process. In reality, it is somewhat true, but not quite. Chromium supports a few different modes of operation, and depending on the policy in effect, process allocation is done differently.

I decided to write up an explanation of the default process model and how it actually works. The goal is for it to be comprehensible to as many people as possible, not requiring a degree in Computer Science. However, basic familiarity with the web platform (HTML/JS) is expected. In order to get to it, there are some concepts that need to be defined, so this is the first post in a series which will explain some of Chromium's internals and demystify some parts of the HTML spec.

I have found the easiest mental model of the Chromium architecture to be that of an operating system - a kernel running at a high privilege level and a number of less privileged usermode application processes. In addition, the usermode processes are isolated from each other in terms of address space and execution context.

The equivalent of the kernel is the main process, which we call the "browser process". It runs with the privileges of the underlying OS user account and handles all operations that require regular user permissions - communication over the network, displaying UI, rendering, processing user input, writing files to disk, etc. The equivalent of the usermode processes are the various types of processes that Chromium's security model supports.
The most common ones are:

- Renderer process - used for parsing and rendering web content using the Blink rendering engine
- GPU process - used for communicating with the GPU driver of the underlying operating system
- Utility process - used for performing untrusted operations, such as parsing untrusted data
- Plugin process - used for running plugins

They all run in a sandboxed environment and are as locked down as possible for the functionality they perform.

In modern operating system design, the principle of least privilege is key and separation between different user accounts is fundamental. The user account is a basic unit of separation, and I will refer to this concept from here on as the "security principal". Each operating system has a different way of representing security principals - for example, UIDs in Unix, SIDs in Windows, etc. On the web, the security principal is the origin - the combination of the scheme, host, and port of the URL the document has originated from. Access control on the web is governed by the Same Origin Policy (SOP), which allows documents that belong to the same origin to communicate directly with each other and access each other synchronously. Two documents that do not belong to the same origin cannot access each other directly and can only communicate asynchronously, usually through the postMessage API.

Overall, the same origin policy has worked very well for the web, but it also has some quirks, which make it unsuitable to treat origins as the security principal for Chromium.

The first reason comes from the HTML specification itself. It allows documents to "relax" their origins for the purpose of evaluating the same origin policy. Since the origin contains the full domain of the host serving the document, it can be a subdomain, for example "foo.bar.example.com".
In most cases, however, the example.com domain has full control over all of its subdomains, and when documents that belong to separate subdomains want to communicate directly, they are not allowed to, due to the restrictions of the same origin policy. To allow this scenario to work, documents are allowed to change their domain for the purposes of evaluating SOP. In the case above, "foo.bar.example.com" can relax its domain up to example.com, which would allow any document on example.com itself to communicate with it. This is achieved through the "domain" property of the document object. It does come with restrictions, though.

In order to understand the restrictions on what document.domain can be set to, one needs to know about the Public Suffix List and how it fits into the security model of the web. Top-level domains like "com", "net", "uk", etc., are treated specially, and no content can (or should) be hosted on those. Each subdomain of a top-level domain can be registered by a different entity and therefore must be treated as completely separate. There are cases, however, where a domain isn't a top-level domain, but still acts as one. An example would be "co.uk", which serves as a parent domain for commercial entities in the UK to register their domains under. Because those cases are effectively in the role of a top-level domain, but are not one, the Public Suffix List exists as a comprehensive source for browsers and other software to use.

Now that we know about the PSL, let's get back to document.domain. A document cannot change its domain to be anything completely generic or very encompassing, such as ".". Browsers allow documents to relax their domain up the DNS hierarchy. To use the example from above, "foo.bar.example.com" can set its domain to "bar.example.com" or "example.com". However, since ".com" is a top-level domain, allowing the document to set its domain to ".com" would lead to security problems. It would allow the document to potentially access documents from any other ".com" domain.
Therefore browsers disallow setting the domain to any value in the Public Suffix List and enforce that it must be a valid domain under one of the entries in the PSL. This concept is often referred to as "eTLD+1" - effective top-level domain (a.k.a. an entry in the PSL) + one level of domains under it. I will use this naming for brevity from here on.

It is this behavior defined by the HTML spec, allowing documents to change their origins, that gives us one of the reasons we cannot use the origin as a security principal in our model. It can change at runtime, and security decisions made at an earlier point in time might no longer be valid. The consistent part that can be taken from the origin is only the eTLD+1 part.

The next oddity of the web is the concept of cookies. It is quite possibly the single most used feature of the web today, but it has its fair share of strange behaviors and brings numerous security problems with it. The problems stem from the fact that cookies don't really play very well with origins. Recall that the origin is the tuple (scheme, host, port), right? The spec, however, is pretty clear that "Cookies do not provide isolation by port". But that isn't all - the spec goes on to the next paragraph and says "Cookies do not provide isolation by scheme". This part has been patched up as the web has evolved, though, and the notion of the "Secure" attribute on cookies was introduced. It marks cookies as available only to hosts running over HTTPS, and since HTTP is the other most used protocol on the web, the scheme of an origin is somewhat better isolated, but port numbers are completely ignored where cookies are concerned. So basically it is impossible to use the origin as a security principal and perform access control against cookie storage.

Finally, there is enough background to understand the security principal used by Chromium - the site. It is defined as the combination of the scheme and the eTLD+1 part of the host. Subdomains and port numbers are ignored.
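The document.domain restriction and the site definition can both be sketched in Python. This is a toy model (not browser code): the public suffix "list" here contains just two entries, and real code would consult the full PSL.

```python
# Toy public suffix list; the real one (publicsuffix.org) has thousands of entries.
PSL = {"com", "co.uk"}

def etld_plus_one(host):
    """Return the eTLD+1 for a host: the longest PSL suffix plus one more label."""
    labels = host.split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in PSL:
            # labels[i-1] is the "+1" label, if there is one.
            return ".".join(labels[i - 1:]) if i > 0 else None
    return None

def may_set_document_domain(current_host, requested):
    """A document may relax its domain up the DNS hierarchy,
    but never to a public suffix itself."""
    if requested in PSL:
        return False
    return current_host == requested or current_host.endswith("." + requested)

def site(scheme, host):
    """Chromium's security principal: scheme plus eTLD+1, ignoring the port."""
    return f"{scheme}://{etld_plus_one(host)}"
```

With this model, "foo.bar.example.com" may relax its domain to "bar.example.com" or "example.com" but not to "com", and its site is https://example.com - matching the example below.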
In the case of https://foo.bar.example.com:2341, the effective site for it will be https://example.com. This allows us to perform access control in a web-compatible way, while still providing a granular level of isolation.

One really nice thing about Chromium is that its source code is open and released under the BSD license. This allows people to reuse code, extend the browser, or fully fork the project. Each of those is probably worthy of a blog post on its own, but I will focus only on the last one.

Taking Chromium and forking it is a fairly easy process - just clone the repository. Make all the changes you would like to - add missing features, include enhancements, create a totally new UI - it is only limited by one's imagination. Building the binary from the source code is a little bit laborious, though not too hard. It does take beefy hardware and some time. Once it is built, publishing it is deceptively easy. However, what comes next?

Software in today's world is not static. As a colleague of mine likes to say - it is almost like a living organism and continuously evolves. There is no "shipping it", as was the norm in the 90s. The web is in a constant release mode, and its model of development has trickled down to client-side software - be it desktop or mobile apps. Chromium has adopted this model from its initial release and is updating on a very short cycle - currently averaging six weeks between stable releases and two weeks between intermediate stable updates. It is this constant change that makes forking it a bit more challenging. However, there are a few steps one can take to ensure a smoother ride.

Infrastructure

With a constantly changing codebase, having a continuous build system is a must for a project as big as Chromium, and it is very useful even for much smaller projects. Setting one up from the get-go will be tremendously useful if there is more than one developer working on the code.
Its value is even higher if the project needs to build on more than one platform.

What is more important - and I would argue it is a must - is using a continuous integration system, running tests on each commit (or thereabouts) to ensure there are no breaking changes. It is a requirement for any software project that needs to be in a position to release a new version at any point in time. The system used in the Chromium project - buildbot - is actually open source and can be adapted to most projects.

Making changes

The most important action one can take when forking Chromium is to study the design of the browser before diving in and making any changes. There are multiple components and layers involved, which interact through well-defined interfaces. Understanding the architecture and the patterns used will pay off tremendously in the long run.

Chromium has two main component layers - content and chrome. The former is what implements the bare bones of a browser engine - the networking stack, rendering engine, browser kernel, multiprocess support, navigation and session history, etc. The chrome layer is built on top of content to implement the browser UI, extensions system, and everything else visible to the user that is not web content.

Each layer communicates with the upper ones through two main patterns - observer and delegate interfaces. Using those interfaces should be the preferred way of extending the browser and building on top of it. Whenever this is not possible, changes to the core are needed. I would strongly suggest preferring to upstream those, if possible of course. It will make maintaining the fork much easier by reducing the burden of keeping up with changes, and it also shares the improvements with the whole community!

Finally, do yourself a favor, to keep you sane in the long run - write tests for all the features you are adding or changes you make. It is the only way to ensure that the long-term regression and bug rate is manageable.
It will save your sanity!

Keep it moving

The Chromium codebase changes constantly and gets around 100 commits each day. The sane way to keep up with the rate of change is to rebase (or merge) your code on tip-of-tree (ToT) daily, or at most weekly. Letting more time lapse makes resolving conflicts a lot harder.

Updating the install base is key to long-term success. The update client used in Chrome on Windows, called Omaha, is also open source. The server-side code is not available, though, since it depends heavily on how Google's internal infrastructure is set up. However, the protocol used to communicate between the client and the server is publicly documented.

Development for Chromium relies quite a bit on mailing lists. Subscribing to the two main ones - chromium-dev@chromium.org and blink-dev@chromium.org - is very helpful. It is the place where major changes are announced, discussion on development happens, and questions about Chromium development are answered. The security team has a dedicated list for discussions - security-dev@chromium.org.

Keep it secure

Security is one of the core tenets of Chromium. Keeping up with security fixes can be a challenging task, which is best solved by keeping your code always rebased on tip-of-tree. If this is not possible, it is best to subscribe to the list. It is the communication mechanism the security team uses to keep external projects based on Chromium up-to-date with all the security bugfixes happening in the project.

Plugins

The web is moving more and more to a world without plugins. For me, this is a very exciting time, as plugins usually tend to weaken browser security. There are two plugins bundled with Chromium to produce Chrome - Adobe Flash Player and a PDF viewer. The latter is now an open source project of its own - PDFium. It can be built and packaged with Chromium, though the same care should be taken as with the browser itself - keep it up-to-date.

Overall, maintaining a fork of Chromium isn't trivial, but it isn't impossible either.
There are a bunch of examples, including the successful migration of the Opera browser from their own rendering engine to building on top of the Chromium content module. Last, but not least - feel free to reach out and ask questions or advice.

The topic of this blog post has been long on my mind, but I did not have a good example to use. Finally, I found one.

Software security is a very complex field and an asymmetric problem space. Arguments over whether defense or offense is harder have been fought for a long time, and they will likely never stop. I think most of us can agree on these two statements:

- offense needs to find only a handful of problems and work hard to turn them into a compromise
- defense needs to architect software to be resilient and work hard to ideally (though not practically) not introduce any problems

Each side requires unique skills, and it is extremely rare for people to be really good at both. What really irks me is that lots of people in the security industry tend to bash the other side. It is easy - one understands their own problem space very, very well and knows how hard it is to be an expert. Also, it feels that the other side is not that hard. I mean, how hard can it be, right? Wrong!

In this post I will pick the side of the defender, since this is where I spend most of my time. The example I will use is the recent events with the Aviator browser, because it is near and dear to my heart. One thing I want to make clear from the get-go - I totally respect their efforts and applaud them for trying. Forking Chromium is not a small feat and not for the faint of heart. The goals for Aviator are admirable, and we definitely need people to experiment with bold and breaking changes. It is through trial and error that we learn, even in proper engineering disciplines :). What could the security industry use more of?

Humbleness!

It is no surprise people on the offensive side bash software developers for stupid mistakes, since the grass is always greener on the other side.
The problem is that many trivialize the work required to fix those mistakes. Some are indeed easy. Fixing a stack-based buffer overflow is not too hard. In other cases, it is harder due to code complexity or just the fundamental architecture of the code.

What humbles me personally is having tried the attack side. It is not too bad if you want to exploit a simple example problem. Once you try to exploit a modern browser, it is a completely different game. I admire all the exploit writers for it and am constantly amazed by their work. The same goes for a lot of the offensive research going on.

I have secretly wished in the past for some of the offensive folks to try to develop and ship a product. When WhiteHat Security released the Aviator browser, I was very much intrigued to see how it would develop. It is not a secret that Jeremiah Grossman and Robert Hansen have given lots of talks on how the web is broken and how browser vendors do not want to fix certain classes of issues. They have never been kind in their remarks to browser vendors, but now they have become one. I watched with interest to see how they mitigated the issues they have been discussing. Heck, I wanted to see clickjacking protection implemented in Chromium, since it is the authors of Aviator that found this attack vector, and I have personally thought about that problem space in the past.

Chris Palmer and I have played around with the idea of Paranoid Mode in Chromium, and as a proof of concept we have written Stannum (source) to see how far we can push it through the extensions APIs. It is much safer to add features to Chromium using extensions than writing C++ code in the browser itself1.

So when Aviator was announced and released initially, I reached out to WhiteHat Security to discuss whether the features they had implemented in C++ could be implemented through the extensions API. My interest was primarily motivated by learning what they had done and what the limitations of the extensions subsystem are.
Unfortunately, the discussion did not go far :(.

Where do I believe they could have done better? You might have guessed it - being humble. The marketing for Aviator is very bold - "the most secure and private Web browser available". This is a very daring claim to make, a hard promise to uphold, and anyone who has been in security should know better. Securing a complex piece of software, such as a browser, is a fairly hard task and requires lots of diligence. It takes quite a bit of effort just to stay on top of all the bugs being discovered and features committed, let alone develop defenses and mitigations.

Releasing the source for Aviator was a great step by WhiteHat. It gives us a great example to learn from. Looking at the changes made, it is clear that most of the code was written by developers who are new to C++. When making such bold statements, I would have expected more mature code. Skilled C++ developers that understand browsers are rare, but it is a problem that can be solved. It takes a lot of time, effort, and desire for someone to learn to use the language and, most importantly, understand the architecture of the browser. Unfortunately, I did not see any evidence that whoever wrote the Aviator-specific code did any studying of the source code or attempted to understand how Chromium is written and integrate the changes well.

What really matters at the end of the day, though, is not the current state of a codebase. After all, every piece of software has bugs. I believe there is one key factor which can determine long term success or failure:

Attitude!

Security vulnerabilities are a fact of life in every large enough codebase.
Even in the project I work on, we have introduced code that allowed geohot to pull off his total ChromeOS pwnage! We owned up to it, the bug was fixed, and we looked around to ensure we did not miss other similar instances. However, what I was most disappointed by was the reaction from WhiteHat when a critical vulnerability was found in the Aviator-specific code: "Yup, Patches welcome, it's open source."2

Our industry would go further if we followed a few simple steps:

Do not trivialize the work of the opposite side; it is more complex than it appears on the surface.

When working on complex software or a complex problem, study it first.

Share ideas and collaborate.

Own up to your mistakes.

Be humble.

1. Even Blink is starting to implement rendering engine features in JavaScript.

2. Never mind that there is no explanation of how to build Aviator, so one can actually verify that the fix works.

Update (2014-09-24): It was decided that the isolated apps experimental feature has some usability problems and will not be shipping in Chrome. As such, this functionality either no longer exists or is most likely broken and should not be used. I'm leaving the post for historical reference.

I have been using separate browsers for a while now to isolate generic web browsing from high value browsing, such as banking or administration of this blog. The reason I've been doing this is that a compromise during generic web browsing is going to be isolated to the browser being used, and the high value browser will remain secure (barring compromise of the underlying OS).

Recently I've decided to give the experimental Chrome feature - "isolated apps" - a try, especially since I've recently started working on Chrome and will likely contribute to taking this feature to completion. Chrome already has a multi-process model in which it uses different renderer processes, which, if compromised, should limit the damage that can be done to the overall browser.
One of the limitations that exists is that renderer processes have access to all of the cookies and other storage mechanisms in the browser (from here on I will only use cookies, though I mean to include other storage types as well). If an attacker can use a bug in WebKit to get code execution in the renderer process, then this limitation allows requesting your highly sensitive cookies and compromising those accounts. What isolated apps helps solve is isolating storage for web applications from the generic web browsing storage, which addresses the problem of a compromised renderer stealing all your cookies. In essence, it simulates running the web application in its own browser, without the need to branch out of your current browser. The aim of this blog post is not to describe how this works, but how to take advantage of this feature. For the full details, read the paper by Charlie Reis and Adam Barth (among others), which underlies the isolated apps work.

In the spirit of my 30 days with experiments, I created manifests for the financial sites I use and for my blog. I wanted to see if I would hit any obvious breaking cases or degraded user experience with those high value sites. A sample manifest file looks like this:

{
  "name": "netsekure blog",
  "version": "1",
  "app": {
    "urls": [ "*://netsekure.org/" ],
    "launch": {
      "web_url": "https://netsekure.org/wp-admin/"
    },
    "isolation": [ "storage" ]
  },
  "permissions": [ "experimental" ]
}

The urls directive is an expression defining the extent encompassed by the web application. The web_url is the launch page for the web app, which provides a known good way to get to the application. The isolation directive instructs Chrome to isolate the storage for this web app from the generic browser storage.

Once the manifest is authored, you can place it in any directory on your local machine, but ensure the directory has no other files.
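Since a malformed manifest will simply fail to load, it can save a round trip to check that the file parses and has the expected keys before handing it to Chrome. A small sketch in Python - the key layout mirrors the sample manifest above, and the check_manifest helper is purely illustrative, not part of any Chrome tooling:

```python
import json

def check_manifest(text):
    """Parse an isolated-app manifest and verify the keys used above exist."""
    m = json.loads(text)  # raises ValueError on malformed JSON
    app = m["app"]
    assert isinstance(app["urls"], list) and app["urls"], "app.urls must be a non-empty list"
    assert "web_url" in app["launch"], "app.launch.web_url is required"
    assert "storage" in app.get("isolation", []), "storage isolation not requested"
    return m

sample = """
{
  "name": "netsekure blog",
  "version": "1",
  "app": {
    "urls": [ "*://netsekure.org/" ],
    "launch": { "web_url": "https://netsekure.org/wp-admin/" },
    "isolation": [ "storage" ]
  },
  "permissions": [ "experimental" ]
}
"""
manifest = check_manifest(sample)
print(manifest["name"])  # -> netsekure blog
```

This only guards against typos like the missing braces and quotes that are easy to introduce by hand; Chrome itself remains the authority on what the manifest may contain.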
To actually take advantage of this, you need to do a couple of things:

Enable experimental APIs, either through chrome://flags or through the command line with --enable-experimental-extension-apis.

Load the manifest file as an extension. Go to the Chrome Settings page for Extensions, enable "Developer Mode", click on "Load unpacked extension", then navigate to the directory where the manifest file resides and load it.

Once you have gone through the above steps, when you open a new tab, it will have an icon for the isolated web application you have authored. You can use the icon to launch the web app, which will use the URL from the manifest and will run in a separate process with isolated storage.

Now that there is an isolated app installed in Chrome, how can one be assured that this indeed works? There are a couple of things I did to confirm. First, when a Chrome web app is opened, the Chrome Task Manager shows it with a different prefix. Generic web pages start with "Tab:" followed by the title of the currently displayed page. The prefix for apps is "App:", which indicates that the browser treats this tab as a web application.

In addition to seeing my blog being treated differently, I wanted to be sure that cookies are not shared with the generic browser storage, so I made sure to delete all cookies for my own domain in the "Cookies and Other Data" settings panel. As expected, but still to my surprise, the site continued functioning, since deleting the cookies only affected the general browser storage and my isolated app cookies were not cleared. This intrigued me as to where those cookies are being stored. It turns out, since this is still just an experimental feature, there is no UI to show the storage for the isolated app yet. If you want to prove this to yourself, just like I wanted to, you have to use a tool that lets you peek into a SQLite database, which stores those cookies in a file very cleverly named - Cookies.
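Any SQLite client will do for that peek; the Python sketch below shows the idea against a throwaway in-memory database. The simplified cookies table (host_key, name, value columns) is an assumption based on Chrome's schema at the time; to inspect a real profile, point the connection at the actual Cookies file instead of ":memory:":

```python
import sqlite3

# Throwaway database standing in for the real Cookies file. For an actual
# profile, connect to the Cookies file under the Isolated Apps directory.
conn = sqlite3.connect(":memory:")

# Simplified sketch of Chrome's cookies table (column names are an
# assumption; the real table has many more columns).
conn.execute("CREATE TABLE cookies (host_key TEXT, name TEXT, value TEXT)")
conn.executemany(
    "INSERT INTO cookies VALUES (?, ?, ?)",
    [("netsekure.org", "wordpress_logged_in", "..."),
     ("netsekure.org", "wordpress_sec", "...")])

# The same SELECT works unchanged against a real Cookies file.
for host, name in conn.execute("SELECT host_key, name FROM cookies ORDER BY name"):
    print(host, name)
```

Running this against the isolated app's Cookies file is what lets you confirm that only the app's own cookies live there.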
The Cookies db and the cache are located in the directory for your current profile, in a subdirectory "Isolated Apps" followed by the unique ID of the app, as generated by Chrome. You can find the ID on the Extensions page, if you expand to see the details for the web app you've installed. In my case on Windows, the full directory is %localappdata%\Google\Chrome\User Data\Default\Isolated Apps\dgipdfobpcceghbjkflhepelgjkkflae. Here is an example of the cookies I had when I went and logged into my blog:

As you can see, there are only two cookies, which were set by WordPress, and no other cookies are present.

Now, after using isolated apps for 30 days, I haven't found anything that was broken by this type of isolation. The sites I've included in my testing, besides my blog, are bankofamerica.com, americanexpress.com, and fidelity.com*. The goal now is to get this to a more usable state, where you don't need to be a Chrome expert to use it ;).

* Can't wait for all the phishing emails now to start arriving ;)
