James Nachbar's Blog | Programming and Plastic Surgery

C++ Builder XE3 "Declaration terminated incorrectly" in System.ZLib.hpp Posted on by nachbar Reply

Using FastReport Enterprise edition, just dropping a frxReportServer component on a blank form and compiling, the XE3 compiler chokes with the following:

Error: System.ZLib.hpp E2040 "Declaration terminated incorrectly", on a line that says "extern DELPHI_PACKAGE char *ZLIB_VERSION;"

The include file train goes:

frxServer.hpp => frxServerSessionManager.hpp => frxServerReports.hpp => frxExportODF.hpp => frxZip.hpp => System.ZLib.hpp

The solution is simply to include System.ZLib.hpp BEFORE frxServer.hpp.  Since the latter is added to the .h file automatically, just put:

#include <System.ZLib.hpp>

first, and the program compiles, links, and runs.
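For example, the top of the generated form header might end up looking something like this (a minimal sketch; the other includes are whatever the IDE generated for your unit):

// Unit1.h (sketch): System.ZLib.hpp must be pulled in before anything that drags in frxServer.hpp
#include <System.ZLib.hpp>   // include this first to avoid the E2040 error in System.ZLib.hpp

#include <System.Classes.hpp>
#include <Vcl.Forms.hpp>
#include <frxServer.hpp>     // added automatically when the frxReportServer component is dropped on the form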

Posted in VCL, xe3 | Leave a reply
Gnostice eDocEngine does not work with C++ Builder XE3 Posted on by nachbar 1

Update, February 25, 2014:  I just tested again, using a fresh install of RAD Studio XE5 Update 2, into which no other components had been installed.  I downloaded the trial version of eDocEngine from http://www.gnostice.com/downloads.asp today, and ran the install into RAD Studio XE5.  I did not run any of the interface installers.  I then created a C++ Builder VCL project with one form, and dropped a TgtPDFEngine component on the form.  I then dropped a TButton, and in the click handler, I added

gtPDFEngine1->BeginDoc();

The program compiles, links, and runs, but when the button is clicked, I get a PDF Setup dialog.  When I click OK without making any changes in the dialog, I get an error: "Project Project1.exe raised exception class $C0000008E with message floating point divide by zero at 0x006d5a5a".  Thus, as of this point, eDocEngine remains unusable in C++ Builder XE5 Update 2.  Gnostice support says they have a workaround, but it requires turning off Runtime Packages, which, with my application, prevents it from running.

I have been using Gnostice's PDF generation products for a while now, using C++ Builder XE.  Recently, I wanted to move to the current version of C++ Builder, version XE3.  I installed a new version of Windows 7 under Parallels on my Mac, and installed RAD Studio XE3, and the various packages I use, keeping snapshots along the way.  Ultimately, I built my application, and with a few manageable changes, it compiled and ran.  However, when I created the form that contained the PDF Engine component from eDocEngine and built the project, the resulting program crashes with a floating point divide by zero error when the form is created.

That was using eDocEngine 3, the most current version available when I was migrating, in January 2013.  I submitted a support request to Gnostice on Jan 18, 2013, about multiple install problems (see below), and a separate support request on Jan 22, 2013 with the show-stopper floating point divide by zero error, and although they have sent a few confirmations, and stated that they can reproduce the problem, they have not found a solution, and eDocEngine 3 still does not work with C++ Builder XE3.

In early February 2013, Gnostice released eDocEngine version 4, and so I tried installing that.  The main installation program worked (although the importers still fail).  Unfortunately, a C++ Builder XE3 project onto which a TgtPDFEngine component is dropped will not compile (the first error is an #include of NtDef.h, which does not exist on my system; it appears to be included by a JEDI file).  I managed to get a test program to compile by removing that #include, and substituting fragments of NtDef.h and other files for the missing include file, and ultimately got my simple test program (a TgtPDFEngine component on an empty form, with no code whatsoever) to compile, build, and start, but the same floating point divide by zero error occurs on startup.  Thus the same problem appears to occur with eDocEngine 4.  I submitted a support request regarding that on Feb 9, 2013.

I did some additional testing on eDocEngine 3 under XE3, which I reported here: https://forums.embarcadero.com/message.jspa?messageID=527312 .  It appears that the problem is that, in XE3, the components are not being correctly initialized when linked for C++, although they are for Delphi.  One of the variables that should be initialized therefore is not, and remains zero.  When it is used, the divide-by-zero error occurs.

My environment for testing is a brand-new installation of Windows 7, with a freshly installed and registered RAD Studio XE3.  Because I am using a virtual machine (Parallels on a Mac), I can use snapshots to make sure that there is no other software or component installed, so I know that the problem is with eDocEngine.  All that is required to produce the problem is dropping a TgtPDFEngine component on the form of a C++ project.  No code or other components are required.  If a TgtPDFEngine component is dropped on the form of a Delphi XE3 project, the program compiles and runs.  I thought I might try controlling the TgtPDFEngine component through a Delphi form, but no luck: if you create a C++ project, and then add a Delphi form and drop the TgtPDFEngine component on the Delphi form, the same divide by zero error occurs on program startup.

At this point, it has been well over a month since I brought this to Gnostice's attention, with no resolution or indication that they have any clue as to how to fix it.  I cannot do any development with my new XE3 environment until they do, or until I find another PDF library to use instead.  The other, non-PDF components do not seem to have this problem.  If you have any suggestions, please make a comment below.

Here is the text of my original support request, noting multiple problems with the eDocEngine installation routines under C++ Builder XE3.  I sent this request to them on Jan 18, with no substantive response as of yet (Feb 28, 2013):

I just installed the newest version of the three VCL products, and had some difficulties accessing them using C++.

The eDocEngine VCL installation completed without issue until I came to the exporters.  I wanted to use the FastReports, DevExpress, HTMLViewer, and TRichView / ScaleRichView exporters.  All four that can be installed using your exporter generator failed immediately, and the error log that the generator said would be created was not created (and no file with that filename was created anywhere within my system).

So, THE FIRST PROBLEM is that the exporter generator fails completely, and THE SECOND PROBLEM is that the exporter generator does not create any log file.

I do have all five underlying components installed into my RAD Studio XE3 through Delphi, but with C++ files created, so that I can use them in C++ as well.  I initially did that for Gnostice, since Gnostice could not use the components when installed directly into C++ (see my blog post for details: http://www.nachbar.name/2011/10/14/installing-gnostice-edocengine-in-c-builder-with-trichview/ ).  The exception is DevExpress, which just has a single installation program that installs the components into both personalities, without any options in the current version, so I have DevExpress installed using their installation program.  I am using the most current version of DevExpress, v2012 vol 2.2.

I manually generated all five exporters per your instructions, modifying the project files to generate all C++ files, including package libs (as I have for all of the components I have discussed, so I can use them in C++; see http://www.nachbar.name/2011/07/16/compiling-thtmlviewer-to-use-in-c-builder-xe/ for more details on how I had generated HTMLViewer).  Of note, your installation packages do not have an XE3 version for the HTMLViewer exporter, so I used a modified version of your XE2 version of the exporter project, so THE THIRD PROBLEM is that you do not include a project file for the HTMLViewer exporter under XE3.

All five of the exporters installed, and all but the DevExpress one can be dropped on a C++ application and the application will compile and run.  All five can be dropped in a Delphi application and the application will compile and run.

So, THE FOURTH PROBLEM is that the DevExpress exporter component causes a compile error when merely dropped on an empty form by itself.  The exact error is "Declaration terminated incorrectly", encountered when parsing gtXPressPrntIntf.hpp after having parsed vcl.h, System.Classes.hpp, Vcl.Controls.hpp, Vcl.StdCtrls.hpp, Vcl.Forms.hpp, gtClasses3.hpp, and gtXportIntf.hpp .  The error occurs on line 282 of System.ZLib.hpp, while reading the "extern DELPHI_PACKAGE char *ZLIB_VERSION;" line.

Other problems encountered in the exporter generator process: THE FIFTH PROBLEM is that many of the exporters have the old names for the units in the .pas files, including FastReport, which uses Graphics (which has been renamed to Vcl.Graphics), Controls, StdCtrls, and Dialogs .  By renaming those, I was able to get that exporter to compile.  The other exporters had similar problems, and required the source .pas files to be changed to get them to compile in XE3.

Then, I installed PDFToolkit VCL.  The installation went without problem, and I can drop the TgtPDFViewer on a C++ form and get the application to compile and run.  Same for TgtPDFDocument and TgtPDFPrinter.  However, if I drop a TgtOutlineViewer on the form I get a link error: "Unable to open file GTPDFOUTLINEVIEWER.OBJ".  If I drop a TgtPDFSearchPanel, I get the link error "Unable to open file GTPDFSEARCHPANEL.OBJ".

So, THE SIXTH PROBLEM is that I cannot use either of those two components in a C++ project without generating a link error.  Often, that error will occur if the .lib file containing those units is not Added to the project.  I can certainly do that manually, if you tell me which .lib library file contains those units.  Note that all five components can be added to a Delphi project without difficulty.

Finally, I installed the XtremePDFConverter.  Again, the installation went without incident.  However, when I drop a TgtPDFConverter component on a C++ form and try to build, I get a Find Static Library dialog: "Unable to find static library gtRTFBaseD17.lib".  That .lib file does not exist on my system, although gtRTFBaseD17.bpl, .dcp, .dpk, .dproj, .rc, and .res do.

So, THE SEVENTH PROBLEM is that I cannot use the XtremePDFConverter in a C++ project.  It works in a Delphi project.

Please advise how I can address these problems, specifically where I can get the missing .lib files for PDFToolkit and XtremePDFConverter (I could probably recompile the packages to create them myself, if that is your recommendation), and how I can get the DevExpress Print Exporter to work (I don't have a clue where to start with that one).

Note that I have my system in a virtual machine, so I can easily switch between system snapshots if there are installations you would like me to test.

Thanks!

Posted in VCL, xe3 | 1 Reply
TClientDataSet, InternalCalc, AutoIncrement, and Key Violations Posted on by nachbar Reply

Embarcadero's C++ Builder/Delphi/RAD Studio product includes TClientDataSet, which gives the programmer an in-memory data table.  It is part of a larger set of functionality for data transfer and storage.  For example, you can easily persist the dataset in a binary or XML format and save it in a file or dataset field.  You can also load it with data from a persistent database, allow the user to edit the data, and then post the changes back to the database.  It not only keeps track of the current state of the data, but also which records have been added, deleted, or changed, and within those records, what the field values were, and can use that information to automatically create the SQL that will update the database.  When this was introduced by Borland, they charged $5,000 for a developer to license it, but slowly started liberalizing their licensing as other technologies (e.g. ADO Recordsets and then ADO.NET) developed the same capability.
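As a quick illustration of the persistence part, saving and reloading the in-memory table is a one-liner each way (a minimal sketch; cds is assumed to be a TClientDataSet that already holds data, and the file name is just an example):

cds->SaveToFile("orders.xml", dfXML);   // persist the current data and its change log as XML
// ... later, or in another program ...
cds->LoadFromFile("orders.xml");        // reload it into an in-memory dataset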

However, it has some strange behavior and interactions with other components that can trip you up in strange ways.  How it handles AutoInc fields is one of those.

TClientDataSet (CDS) AutoInc fields can operate in one of two ways, depending (generally) on whether you have used a TDataSetProvider to load data into the CDS.  If you have not, then when a new record is posted to the CDS, a new value for AutoInc, starting at 1, will be put into the AutoInc field.  This happens even if AutoGenerateValue is arNone, and regardless of the AutoInc field's ReadOnly flag.  Even if you put your own value into the AutoInc field, your value will be overwritten with the new, auto-incremented value.  If there is a way to stop this behavior, other than loading data from a TDataSetProvider (cds->Data = prov->Data), I haven't found it.

However, when you go to ApplyUpdates(), the generated SQL will not have those AutoInc values, which is good, because the persisted database will assign its own AutoInc values.

So, what happens if you first load data from the database, using the TDataSetProvider?  Well, something completely different, which is good, because the AutoInc fields will already have values from the persisted database.  Now, if you create a new record in the CDS with Append(), the AutoInc field will be Null.  That record can be posted to the CDS.  However, when you then Append() a second record and attempt to Post() it, your Post() will fail with a Key Violation exception, because otherwise there would have been two records with the same AutoInc value in the CDS (i.e., Null).

The workaround for this problem (other than using GUIDs rather than AutoInc fields for your primary key, which might be a great choice if it is an option) is to assign a unique value to the AutoInc field in your AfterInsert handler for the CDS.  Something like:

static int AutoIncValue = -1;
DataSet->FieldByName("ID")->AsInteger = AutoIncValue--;

will work (don't forget to turn off the default ReadOnly flag for the AutoInc field in the CDS), and this will generate a series of negative numbers for the AutoInc field in the CDS.  (No need to call Edit(), since the DataSet will already be in State dsInsert when the AfterInsert handler is called.)  That way, if the AutoInc field is not generating its own values (again, generally after you have loaded some records using the TDataSetProvider from the persistent database), AutoInc will get a series of progressively negative values which will not clash with any of your records loaded from the database.  Just be aware that, if you have NOT loaded any records from your persistent data store, including the case where you loaded a data packet that did not happen to have any records, your AutoInc values will be ignored EVEN IN THE CDS once you call Post() to post the record to the CDS.  Thus, your first record, to which you assigned an AutoInc of -1 in AfterInsert, will become 1 after the Post() call.  (Of course, it will likely become something else in the persistent data store, unless it is the first record there as well).

This strange behavior makes the CDS harder to use, because you cannot use the AutoInc field to link tables in your briefcase, and have to use another field that won't be changed by the CDS underneath you.  Unfortunately, while the InternalCalc field would seem to be ideal for that purpose, it won't work, for two reasons.

The first reason, which makes absolutely no sense to me, is that THE PRESENCE OF AN INTERNAL CALC FIELD IN THE CDS CAUSES THE AUTOINC FIELD TO ASSIGN ITS OWN INCREASING VALUES, EVEN WHEN YOU HAVE LOADED DATA FROM A DATA PACKET, AND EVEN IF YOU HAVE ALREADY ASSIGNED A DIFFERENT VALUE TO THAT AUTOINC FIELD!  That means that, if the data you loaded happens to have an AutoInc value less than the number of records you are adding to the CDS, you will get a Key Violation when you call Post() on the record that matches.  For example, if you load a record with 1 in the AutoInc field, then Append() a record, assign -1 to AutoInc in your AfterInsert handler, and then call Post(), your -1 gets replaced with 1, and the Post() fails, because otherwise there would have been two records with 1 in the AutoInc field.  If you load a record with 2 in AutoInc, the first new record in the CDS will get the 1, and the second record will cause the Key Violation.

The second problem with InternalCalc fields when using them in a briefcase is that they do not get included in the Delta DataSet passed to BeforeUpdateRecord, where you could use them to update any linked tables you have in your briefcase.

Thus, my workaround for these problems:

1) Create an AfterInsert handler for the CDS.  Use it to assign a progressively negative number to the AutoInc field.  Get the progressively negative number from a centralized routine, so it won't clash with other progressively negative numbers in other CDSs in your briefcase.  Do NOT use the AutoInc field for anything else, and certainly not for linking tables, because, should you happen to load a Data packet which happens not to have any records, your AutoInc values will be overwritten with positive numbers which (probably) match the AutoInc values of records in your persistent data store which you did not load.

2) Create a second field, called LinkingID, in your CDS.  Make that a fkData field so that it will be passed in your DeltaDS to the BeforeUpdateRecord handler and so it does not make AutoInc assign progressively positive numbers, which could clash with the AutoInc of your loaded records, as a fkInternalCalc would.  You will also need LinkingID in the DataSet you are loading the data packet from through the TDataSetProvider, but it should NOT be part of your persistent data store.  Otherwise, you will get a "Field LinkingID not found" exception when you try to assign the data packet from the provider.  A fkCalculated field in the source dataset is ideal for this LinkingID field in the source dataset.  Use the OnCalcFields handler of the source dataset to set the value of this field to that of the AutoInc field in the source.  If you are loading from pure SQL, you can include something like "ID AS [LinkingID]" in your SELECT clause.  Note that this will make the LinkingID field act as ReadOnly in the CDS for records that have been loaded from the data store, even though ReadOnly is false for that field, and even though you can edit LinkingID in the CDS for newly-inserted records.

3) In your CDS's AfterInsert, along with assigning your progressively decreasing negative number to the AutoInc field (where it may be destroyed by Post()), also assign it to your LinkingID field, where it will NOT be destroyed by Post().  Although you cannot edit LinkingID for records loaded from the data store, you CAN edit it for new records.  Note that you should not call Edit() and Post() in AfterInsert, but can just assign the new value to LinkingID.

4) Now, you can use LinkingID to link data sets in your briefcase.  You can create new records in linked tables, and assign the LinkingID value to foreign keys in those tables.  However, remember that your negative LinkingID values for new records in the CDS will NOT wind up in your persistent data store, so the foreign keys will need to be updated when the data is persisted.

5) You can do that in the BeforeUpdateRecord handler of the TDataSetProvider.  You should ApplyUpdates() for your master table first.  In BeforeUpdateRecord, you will have UpdateKind of ukInsert when you get the inserted record.  You can then get DeltaDS->FieldByName("LinkingID")->AsInteger, which will be the LinkingID, and which will be negative.  The trick is that you have to post the inserted record yourself, using a second DataSet or whatever method you choose, and get the new AutoInc value from the persistent data store, all within the BeforeUpdateRecord call.  Now, save both the negative, temporary LinkingID and the new, permanent AutoInc value returned by the persistent data store.  If you use a single, centralized (within your app) source of those temporary negative AutoIncs, you can use a single table or array to store the corresponding permanent AutoIncs for all of your tables.  Don't forget to set Applied to true in BeforeUpdateRecord to tell the provider that you have inserted the new record in the permanent data store.

6) For the detail tables, call ApplyUpdates() after the master's ApplyUpdates().  In their BeforeUpdateRecord, for either ukInsert or ukModify, check the foreign keys for references to your master table.  If the foreign key is negative, that means it points to a temporary LinkingID.  Replace it with the permanent AutoInc from the data store from the table you got back in step 5.  You just look up the negative value and replace it with the corresponding positive value you got back from the data store for that temporary negative LinkingID.  (This is why you can't use the AutoInc field directly instead of the LinkingID: if the CDS changes your negative AutoInc values to positive values, and you had used those positive values for your foreign keys, then when you are saving the detail table records you won't know whether the positive foreign key references the primary key value of your new record or the primary key of some other record in the data store.)

Or, you can just use GUIDs to assign your primary keys and forget AutoInc fields altogether!
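Putting steps 1 through 3 together, the AfterInsert handler might look roughly like this (a minimal sketch; the CDS, handler, and field names are just examples, and NextTempID is a hypothetical centralized counter):

// hypothetical app-wide source of temporary negative keys, shared by every CDS in the briefcase
static int g_LastTempID = 0;
int NextTempID() { return --g_LastTempID; }

void __fastcall TForm1::cdsOrdersAfterInsert(TDataSet *DataSet)
{
    // the dataset is already in dsInsert here, so no Edit()/Post() is needed
    int tempID = NextTempID();
    DataSet->FieldByName("ID")->AsInteger = tempID;        // may be overwritten when Post() is called
    DataSet->FieldByName("LinkingID")->AsInteger = tempID; // survives Post(); use this one for linking
}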

(BTW, another way in which InternalCalc fields and TClientDataSet don't get along is that, if you have a CDS with an InternalCalc field, you can only call CreateDataSet() once.  If you try to call it again, even after setting CDS->Active = false, you get a "Name not unique in this context" exception.  Don't ask me why that error message makes sense.  If there is no InternalCalc field, then there is no problem calling CreateDataSet() and setting Active false as many times as you want.  As noted on Quality Central, Embarcadero doesn't consider this behavior (or the non-helpful error message) a bug.)

Posted in cbxe, Programming, VCL | Leave a reply
Installing Gnostice eDocEngine in C++ Builder with TRichView Posted on by nachbar 1

I have been using the TRichView RTF editor in C++ Builder XE. I needed PDF creation for my project, and had been planning to use the Gnostice eDocEngine product, which has a component for exporting from TRichView. However, the Gnostice eDocEngine installation failed for both the TRichView and THtmlViewer components.

I considered other PDF creation libraries mentioned on the TRichView website. However, the llionsoft product does not work with any version of C++ Builder since 2006 (according to their website), and the wPDF license prohibits sending PDF files created by it over the Internet (and one of the important functions of my program is emailing the .pdfs created).  Thus, since I would not be able to email the .pdfs created by wPDF, that license was unacceptable. Gnostice has a much better license.

In reviewing the error message, it appeared that the Gnostice component was not finding the TRichView component, because TRichView was installed into C++ Builder rather than into Delphi.

The solution was to install TRichView into Delphi, so that Gnostice could find it, but so that C++ Builder could also use it. Sergey Tkachenko (the author of TRichView) helpfully provided this info to do that:

Well, there is a way for installing a Delphi package both for Delphi and C++Builder.

How to do it
1) Uninstall all RichView-related C++Builder packages. Delete all RichView-related .obj, .hpp, and .dcu files.
2) Open RVPkgDXE.dproj, right click in the Project Manager, choose Options.
In the Options dialog, choose Build configuration (combobox)=Base. On the page Delphi Compiler | Output C/C++, choose C/C++ Output file generation = Generate all C++Builder files. OK to close the dialog. Save the package and install.
Repeat for all RichView packages.
3) In all your projects, change references from RVPkgCBXE to RVPkgDXE and so on.

Differences from the old approach:
HPP files are not placed in the same directory as the .pas files; they are placed in $(BDSCOMMONDIR)\hpp (such as Documents and Settings\All Users\Rad Studio\8.0\hpp)
OBJ files are not created. Instead, they are assembled into a LIB file placed in $(BDSCOMMONDIR)\Dcp (such as RVPkgDXE.lib)

One final point: once you do this, you will have to add the TRichView .lib files to the project manually, since C++ Builder will no longer do that.  That's inconvenient, but not a deal-killer.

The Gnostice eDocEngine installation for TRichView only works when TRichView is installed in Delphi. Thus, I had to uninstall and reinstall the entire TRichView stack.

I had to remove all of the .bpl, .hpp, etc. files so they wouldn't be found, and everything installed into C++ Builder was uninstalled using Components / Install.

Then, reinstall the entire stack into Delphi, but be sure to set the Create All C++ Files option for each component. That creates the .hpp files, etc. in the /Users/Public/Documents/RadStudio/8.0 (for XE) folders.

Most components will require the previously-installed and used components put into the Requires portion of the project. You will know that is needed because, on installation (or sometimes use) you will get an error that a component cannot be installed because it contains a unit that is also used by another component. When you get that error, it means you have to go back and add the other component (which was compiled first) into the required section of the new, later component.

Ultimately, it was possible to install the TRichView eDocEngine connector by installing into Delphi first, and then into C++ Builder (it didn't work when installing into Delphi and C++ Builder at the same time).

FastReport connectors installed without problem. I could not get the THtmlViewer connector to install using the automatic installation program, but it did install using the same technique: install into Delphi, creating the C++ Builder files. The installation program produces a log file (which is named by the installation program when it fails).

The THtmlViewer component installation program failed. The manual installation went as follows:

Build the gtHtmlVwExpD15.dproj project first. It does not install, and the context menu in Delphi does not offer an Install option. Then, build and then install DCLgtHtmlVwExpD15.dproj . Having created the C++ Builder files, the HTMLViewer connector for eDocEngine worked.

Obviously, this will only work if you have RAD Studio rather than just C++ Builder.

The same technique worked for the DevExpress ExpressPrinting component. The automated install failed because it requires (in the literal sense) the Delphi-only version of the DevExpress libraries. However, I was able to get a manual install to work by first loading the gtXPressExpD15.dproj project (from the Source folder of the eDocEngine installation) into Rad Studio. I changed the project to Activate the Release build, and to create all C++ files. However, the build failed because of the requirement for the Delphi-only library. I therefore removed the reference to that library, and added a reference to dxPSCoreRS15.dcp from the DevExpress Library folder. The build then succeeded. Then, I loaded the DCLgtXPressExpD15.dproj project, activated the Release build, changed the project options to create the C++ files (no need to change the requires), and the Build and then Install succeeded, and I was able to use the component in my Delphi and C++ Builder projects.

For the PDFToolkit (starting with 3), the installation went OK, but compiling a program with a TgtPDFViewer component fails with a slew of link errors, starting with "Unresolved external GdipCloneMatrix referenced from c: . . . GTPDF32DXE.LIB". Googling those functions reveals that they are part of the GDI+ library. The solution was to add the gdiplus.lib library to the project. That file is in the C:\Program Files (x86)\Embarcadero\RAD Studio\8.0\lib\win32\release\psdk folder. Right click on the project, select Add..., and pick that file to add to the project. Then, it will compile and run.

As of this writing, the PDFToolkit version 4 installation program does not work. It includes the gtPDFViewer.hpp file, which tries to include files such as System.SysUtils.hpp, System.Classes.hpp, and Vcl.Controls.hpp. None of those files exist. Of course, there are files such as SysUtils.hpp, Classes.hpp, Controls.hpp, and Forms.hpp, and includes for those files would work. However, PDFToolkit version 3 does install correctly.

Update Dec 8, 2011:  PDFToolkit version 4 (4.0.1.105) does the same thing when installed in both C++ Builder XE and C++ Builder XE2, because the XE2 installation causes an extra include of the XE2 files EVEN IN XE PROJECTS.  The workaround is not to install the XE2 version, only the XE version.  Then, the file compiles, but the link still fails. According to an email from Gnostice:

Please add the following libs into your project before building the Project

(PDF toolkit installation path)\PDFtoolkit VCL\Lib\RADXE\gtPDFkitDXEProP.lib
(PDF toolkit Installation path)\Shared\Lib\RADXE\cbcrypt32.lib
(PDF toolkit Installation path)\Shared\Lib\RADXE\cbgdiplus.lib
(PDF toolkit Installation path)\Shared\Lib\RADXE\freetype2.lib
(PDF toolkit Installation path)\Shared\Lib\RADXE\gtPDF32DXE.lib
(PDF toolkit Installation path)\Shared\Lib\RADXE\gtusp.lib

With those changes, a project using the PDF Toolkit 4 compiles and links. Hopefully they will come up with a fix for the install problem before my other libraries are ready for use with XE2.

Posted in cbxe, Programming, VCL | 1 Reply
Parallels Desktop VS VMWare Fusion deleting multiple snapshots Posted on by nachbar 4

I have used both Parallels Desktop and VMWare Fusion to run Windows, largely for Windows software development, on a Mac, over the past several years.  There have been a number of reviews comparing them, which I won't repeat.  Parallels Desktop allowed much better hardware configuration and much better keyboard mapping a couple of years ago, which is why I have been using it over the past couple of years (newer versions of VMWare Fusion may have improved on that; I don't know).

One of the biggest advantages of using Virtual Machines for software development (with either Parallels Desktop or VMWare Fusion) is the ability to easily create snapshots.  When you are doing development, you often need to install or update software components, and never know when you may destabilize your system.  The ability to easily create system snapshots is similar to commits in source code control, in that you can easily save multiple states that you can go back to if you need to.  As a result, I tend to create a LOT of snapshots.

My biggest complaint about Parallels Desktop in comparison to VMWare Fusion is that it does not allow you to delete multiple snapshots at once.  Deletion of one snapshot (with either Parallels or VMWare) can easily take five to ten minutes or longer.  If you have to delete thirty or forty snapshots, clicking on one, waiting ten minutes, clicking on the next, waiting ten minutes, it can get very time consuming.

VMWare Fusion has had a much better solution for this problem, and is much better than Parallels Desktop in this regard.  In Fusion, you merely select all of the snapshots you want to delete, and then select Delete.  It may take a while, but you don't have to manually select each snapshot, one at a time, delete it, and wait.  It can run overnight, and when you come back, the work is all done!  Alas, Parallels Desktop does not allow you to select more than one snapshot to delete at a time.

Recently, I submitted a request to Parallels about my problem, and to my delight (and surprise), their technician called me on the phone the next day to help me.  Ultimately, they led me to what seems to be a solution, but not without a couple of mis-steps, so I wanted to document my experience for anyone else having this problem.

One thing about snapshots: they are NOT backups.  The virtual disks with either VMWare or Parallels are MUCH more fragile than a regular hard drive controlled with a journaled format by a modern operating system (Windows, Linux, or OS X).  I have had both VMWare and Parallels disk images become corrupted fairly often, which results in ALL of your snapshots (and everything else) being lost.  If you use VMs, you MUST back up the ENTIRE VM FILE, just as with source code control.  That also has the advantage of backing up all of your snapshot history.  Before doing anything I describe here, be sure you have at least one backup (and preferably more, on different disks) of your VM.  As I mention, my VM DID become corrupted, and I had to use a backup.  You have been warned.

I had backed up my entire VM with all of my old snapshots, and I just wanted to remove all of the old snapshots from my current, working copy.  I had a virtual hard disk with about 100 GB of files, but the VM had swollen to about 450 GB with all of the snapshots.  Parallels has a Delete snapshots with children option, which I had used to delete any branches I had (usually when I had to go back to a working configuration).  Thus, I just had one long line of snapshots in my Parallels VM.

The technician directed me to use the prl_disk_tool merge option.  He directed me to the http://kb.parallels.com/9165 page.  He had me open a terminal session, and copy

prl_disk_tool merge --hdd

onto the command line.  Then, he had me open Parallels, and in the Virtual Machines List, right click on the offending VM and select Show in Finder.  Then, right click on the VM file in Finder and select Show Package Contents.  Then (and I didn't know you could do this), click on the large .hdd file and drag it into the Terminal window; that copied the full path of the .hdd file, properly escaped, onto the end of the command line I was building, which left me with a command line like

prl_disk_tool merge --hdd /Users/nachbar/Documents/Parallels/Win\ 7.pvm/Win\ 7-0.hdd

Then, press Enter.  I was immediately presented with an operation progress indicator starting at 1%, and slowly increasing to 100% over about 90 minutes.  At the same time, the size of my VM dropped, and the free space on my physical hard drive increased, by about 300 GB.  But that only partially fixed the problem.

I was then able to run my smaller VM with the merged snapshots, but Snapshot Manager showed all of the snapshots still there.  I emailed them back, and they directed me to the page about snapshot manager and deleting snapshots there.  I was able to delete them one at a time, and it was a lot faster (five to ten seconds each).  That page also described "Delete with children", and so I thought I would use that, figuring that I would be warned if it was going to delete my current VM state.  Big mistake.  I was almost immediately met with the "Unable to Connect to Parallels Service" error described at http://kb.parallels.com/8089 .  The "Deleting Snapshot" animation continued, however, and I allowed it to run, figuring that if I killed it I would certainly corrupt my VM.  However, after 14 hours, I really had no choice but to kill the Parallels application.  I rebooted, and to my surprise, the VM booted (with a message from Parallels that it had recovered from a serious error).  However, all of the snapshots were still there, and now I could not delete any of them, being met with the "The configuration file you specified is invalid" error.  Thus, still a kind of VM file corruption.

So, I went back to my VM file backup, and started over.  I again ran prl_disk_tool merge, and this time I just deleted the snapshot zombies one at a time, each taking only about 5 seconds (since the underlying snapshots had already been merged).  About half way through, I still had some trouble: I got the "Unable to Connect to Parallels Service" message, and Parallels hung and had to be Force Quit.  Restarting Parallels produced the "Unable to Connect to Parallels Service" error, so I followed the instructions in http://kb.parallels.com/8089 to restart the Parallels Service.  I also got the "Parallels has recovered from a serious error" message.  I was then able to proceed with deleting more snapshot zombies until I had removed all of them.  Now, my VM is down from 450 GB to about 117 GB, and the VM seems faster (at least on startup, which used to be quite slow).

So, clearly a kludge, and still involving a Force Quit and Serious Error recovery, but ultimately I got to a much smaller VM.  I would note that I usually shut down my VM before doing these procedures.  I'm not sure if it is needed, but it seemed reasonable.

Hopefully, Parallels will catch up to where VMWare has been for many years and allow us to select multiple snapshots to delete at one time.  In the meantime, you may want to try this ON A BACKUP OF YOUR VM if you have a lot of snapshots and a bloated VM.

Posted in Uncategorized | 4 Replies
Compiling THtmlViewer to use in C++ Builder XE Posted on by nachbar Reply

THtmlViewer is a component that displays HTML in a Delphi/C++ Builder form.  It is also used by the outstanding TRichView component for importing HTML.  It is available under the MIT license, so it can be used in commercial projects.  It is hosted on Google Code: http://code.google.com/p/thtmlviewer/.  It can be downloaded using subversion as described here: http://code.google.com/p/thtmlviewer/source/checkout, and as noted on that page,

# Non-members may check out a read-only working copy anonymously over HTTP.
svn checkout http://thtmlviewer.googlecode.com/svn/trunk/ thtmlviewer-read-only

The author of TRichView recommends that you NOT use the trunk version, but rather branch 11:

svn checkout http://thtmlviewer.googlecode.com/svn/branches/11/ thtmlviewer-read-only

The 2010 Delphi project imports into Delphi XE and compiles, and you can install the resulting package, but the components only show up when running Delphi!

To create the components for C++ projects, you need to create a C++ package, but NOT using File/New/Package C++ Builder.

Instead, use Component/Install Component, Install into new package, and select the same .pas files used in the Delphi package (basically, all of them in the source folder).
You then need to name the package and choose its save location (for the package folder).  You can give it a description, which will later show up in the Component/Install Packages dialog.

Make sure to specify that you want a C++ Package, not a Delphi package.  Select Finish, and your package will be created, although linking will fail with an error.

You have more work to do before it will work properly.  You need to specify the -LUDesignIDE option to the Delphi compiler: in Project Options/Delphi Compiler/Compiling/Other options/Additional options to pass to the compiler, include "-LUDesignIDE" (without the quotes).  Be sure to use the correct build configuration at the top of the dialog; you will want a Release build, so select Base or Release.

Also, Delphi needs to know to make the .hpp files, etc.  In the Project Options/Delphi Compiler/Compiling/Output C/C++ / C/C++ Output file generation setting, pick Generate all C++Builder files (including package libs) so you get the header files as well as the package lib to install.

Finally, when you try to install the package, you will get an error that it conflicts with a file included in the vclimg150 package.  The solution is to include vclimg150.bpl in the Requires list for the package.  Right-click on Requires, and add vclimg150.bpl (just type the name; you don't need to browse to the file, and when it shows up in the Requires list, it will be vclimg.bpi, even though you typed vclimg150.bpl).

Now, pick the Release build, and build it (In the Project Manager, open up Build Configurations, right click on Release, and select Build)

Then, you need to install it, using Component/Install Packages .

First, save and close your THTMLViewer C++ Project.  Then, WITH NO PROJECTS OPEN, Select Component/Install Packages…  THTMLViewer should not be listed in the design packages check list box.

Click Add…, and go to the directory where your library was placed (this is set under Project/Options for the project that made the THTMLViewer component; by default under Windows 7 and RAD Studio XE it is C:\Users\Public\Documents\RAD Studio\8.0\Bpl).  Select the .bpl package library that you just made, and click OK.

Then, you can create a new C++ Project, and you should be able to select the THtmlViewer component and drop it on the form.  Make a FormCreate handler, containing the line HtmlViewer1->LoadFromString(WideString("Hello")); .  Compile the project, and it may complain about missing .h files.  Just browse to the source directory, where the .hpp files should be.  You can select the .hpp file even though RAD Studio is looking for the .h file.  If it compiles and you see "Hello" in the window, you know you are done!
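For reference, the complete handler is tiny (a minimal sketch; TForm1 and HtmlViewer1 are whatever names the IDE gave your form and component):

void __fastcall TForm1::FormCreate(TObject *Sender)
{
    // display a trivial HTML string to confirm the component is working
    HtmlViewer1->LoadFromString(WideString("Hello"));
}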

Incidentally, creating the component project under Delphi is easier than under C++ Builder: Delphi automatically recognizes and fixes the vclimg150 problem, presenting it in a dialog box, with OK to add the reference and rebuild the component.  Also, Delphi automatically installs the component.  However, the component does not install under both C++ Builder and Delphi at the same time (I could not figure out how to do that), and since I don't really need it under Delphi, I did not pursue it.

Posted in cbxe, Programming, VCL | Leave a reply
Configuring AudioCodes MP-112 VoIP Gateway for Fax with Asterisk Posted on by nachbar 6

I am setting up an Asterisk/Elastix system to work with a Cox PRI circuit, and I needed a gateway for managing faxes.  I had previously used the AudioCodes MP-202, which was fairly easy to set up, but that one is no longer available.  Its replacement appears to be the MP-112.  However, the setup is far more complicated, largely because the MP-112 has a lot more capability, but also because the defaults for the MP-112 were not helpful for my application.

This is a very flexible computer and router, capable of serving DHCP, acting as a firewall, etc.  However, it is set by default to intercept any faxes and route them via T.38.  T.38 only works if the receiving system is expecting T.38 (Asterisk is not capable of taking T.38 and turning it into a regular fax).  If you don't disable the T.38, simple faxes might squeak through, but even full-page faxes will get intercepted.  The symptom will be that only part of the fax page goes through.

Configuring the MP-112 requires first configuring it to register as a SIP extension, and then configuring it to send calls to the Asterisk server (as the proxy), and then disabling fax detection so that fax calls go through as regular voice calls.

You connect to the MP-112 via the web interface.  The default IP is 10.1.10.10 .  The default username is Admin, and the default password is Admin .  Note that I (and others) have had trouble with Safari caching a cookie or something and having trouble authenticating.  I had better luck with Firefox or Chrome.  Note that, with Chrome, when I pressed the "t" key, the whole page cleared!  If I needed a "t", I was able to get it by pasting the text in.

For troubleshooting, you can view the MP-112's log from the Status and Diagnostics tab, under the Status and Diagnostics folder within that tab, as Message Log.  Note that, as long as you are on that Message Log tab, the log is running.  I was able to select all and then copy the log to TextMate to review it.

Also under Status and Diagnostics/Gateway Statistics is Registration Status, which tells you whether your SIP phones have registered properly, as well as lots of other diagnostics.

Whenever you want to save the changes you have made on a page, click Submit.  You should do that for every page before leaving the page.  To save the changes permanently into flash (so they are not lost when the device is unplugged), press the Burn button.

There is lots of documentation on these very complex devices on the AudioCodes website.  This is just a quick-start for the common use case (for me) of using the AudioCodes to connect a fax machine or analog phone as a SIP extension on an Asterisk system.

Before you start, go to the Management tab, Software Update/Configuration File; from here you can download the configuration file from the MP-112 that describes the factory default.  If you ever want to go back, you can also upload a configuration file.  Once you get it working, after burning your configuration into the flash ROM, save a configuration file so you can always go back.  With the hundreds of configuration settings for this device, the configuration file also gives you an easily-visualized view of the changes you have made.

AudioCodes nicely documents the configuration file format.  In the configuration file, there are sections indicated by [section name].  Those sections are only for human consumption, and do NOT have any effect on the function of the configuration file.  Thus, you can change the configuration file, and don't have to worry about the sections.  The exception is the table configuration.  Tables are bounded by [ TABLENAME ] and [ \TABLENAME ].  The tables have a very specific format, and for tables, the section names DO matter.  Also, if a configuration setting is not mentioned in the config file, generally a documented default is used.

When making changes to a page, be certain to hit the Submit button on EACH page.  Don't forget to hit Burn when you are done.

First, you may want to go to the Configuration/Network Settings/IP Settings, and set the IP address, netmask, and default gateway.  Alternatively, the MP-112 will read its information from a DHCP server, if one is available, if DHCP is enabled on the device.  If you use DHCP, you will need to figure out the IP address assigned, so you can access the web interface.  If DHCP is available and enabled, it seems to override the setting under IP address.

The DHCP option is under Configuration/Network Settings/Application Settings/DHCP Settings/Enable DHCP, but it is hard to make it stick.  Section 10.1.8 of the User Manual indicates that you have to do a reset with the reset button; however, that did not work for me.  I had to click Submit, and then Burn.  Then, unplugging and replugging the power cord worked to cause DHCP to be used.

Once you have IP configured, you have to get both ports to register with the SIP server (Asterisk) and route calls.

Regarding the call routing, the MP-112 can use a SIP proxy, a routing table, or both.  Unfortunately, the defaults do not favor either option.

Under Configuration/Protocol Configuration/Proxies-IPGroups-Registration/Proxy Registration/Use Default Proxy, set this to Yes.  This is IsProxyUsed in the config file.  In the Proxy Sets Table (the button is on the same page), put the IP address of your Asterisk server under Proxy Address, and UDP under transport type.  You only need the one proxy, so just fill in the first line.

You need to set the authentication to Per Endpoint, so that each port can register. Set Configuration/Protocol Configuration/Proxies-IPGroups-Registration/Proxy Registration/Authentication Mode to Per Endpoint.  Then, you need to set Configuration/Protocol Configuration/Proxies-IpGroups-Registration/Proxy Registration/Enable Registration to Enable.

Open the Authentication page: Configuration/Protocol Configuration/Endpoint Settings/Authentication, and enter the SIP username and password for each endpoint.  Each endpoint is configured as a separate extension.  In Elastix/FreePBX, the username is typically the extension number, although that's not the most secure way to configure your PBX.

On Configuration/Protocol Configuration/Endpoint Number/EndPoint Phone Number, you can set the extension number for each of your ports.  If you put 1-2 under Channels, the Phone Number is the phone number for the first FXS port, and the phone number for the second is the phone number for the first, plus one.  If you put 1 under channel in the first line, and 2 under channel in the second line, you can assign independent extension numbers for each port.  Press Register to register with the Asterisk server.  You can check whether registration worked under Status Diagnostics/Gateway Statistics/Registration Status, and/or from the CLI interface on Asterisk using SIP SHOW PEERS.

Make sure that the MP-112 shows Registered for your ports before you move on.

Next, you need to set Max Digits in Phone Num (Configuration/Protocol Configuration/Protocol Definition/DTMF and Dialing/Max Digits in Phone Num) to something like 30 (or at least a number no less than the greatest number of digits you want to be able to dial).  If you don't, the default is 3, and every time you enter three digits, the MP-112 will try to connect the call using only those three digits.

Now, if SIP registration is working, you should be able to make phone calls into and out of your MP-112.  Test that before working on the fax setup.  You can use the Status Diagnostics/Status Diagnostics/Message Log to see the SIP messages and additional debugging info, but be certain NOT to leave that as the active tab in your browser.

Now, for the fax handling:

On the configuration/Protocol Configuration/Coders and Profile Definitions/Coders page, I left only G.711A-law and G.711U-law as the codecs being used.  If you want to use different codecs, enter them here, but some compression schemes may not work well transporting faxes.

To avoid the T.38 fax interception described above, you will need to use Fax / Modem Transparent Mode, as described in section 8.2.6.2.7 (page 252) of the MP-112 users manual.  This page shows the parameters you want:

First, set IsFaxUsed = 0 ; This is at Configuration/Protocol Configuration/Protocol Definition/SIP General Parameters/Fax Signaling Method: 0 = No Fax

Then, set FaxTransportMode = 0 ; This is at Configuration/Media Settings/Fax-Modem-CID Settings/Fax Transport Mode: 0 = Disable = transparent mode

Then, set the transport types:

V21ModemTransportType = 0 ; at Configuration/Media Settings/Fax-Modem-CID Settings/V.21 Modem Transport Type: 0 = Disable = transparent mode

Do the same for V22ModemTransportType = 0 (disable) , V23ModemTransportType = 0 (disable) , V32ModemTransportType = 0 (disable), V34ModemTransportType = 0 (disable)

The docs also say to set BellModemTransportType = 0 , but that is apparently NOT in the web interface.  However, 0 = disable is the default for BellModemTransportType.

Note that those parameters are as shown in the configuration file.  They are described in the table in section 10.9, General SIP Parameters. IsFaxUsed is mentioned on page 385. It is ISFAXUSED in the configuration file, and it is under Configuration/Protocol Configuration/Protocol Definition/SIP General Parameters/Fax Signaling Method in the web interface.
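Collected together, the fax-related portion of the downloaded configuration file would look something like this (a sketch based only on the parameters above; your file may order or group them differently):

; fax/modem transparent mode: disable T.38 interception and modem transport detection
IsFaxUsed = 0
FaxTransportMode = 0
V21ModemTransportType = 0
V22ModemTransportType = 0
V23ModemTransportType = 0
V32ModemTransportType = 0
V34ModemTransportType = 0
BellModemTransportType = 0   ; already the default, and not exposed in the web interface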

At this point, I was able to send and receive faxes with no problem.  If you have trouble, be sure to check out the troubleshooting tips above.

Posted in Uncategorized | 6 Replies
Yealink SIP-T38G Openvpn VPN not functional Posted on by nachbar 6

Update: the nice people at Voipsupply.com got me a firmware upgrade (38.0.0.70), which is not posted anywhere on the Yealink website.  Although it does cause the phone to upload the vpn configuration file, it still doesn't work.  Specifically, a Wireshark trace shows absolutely no packets going to the openvpn server.  That is in contrast to the exact same process on the T28 using the same configuration file (I know, but we are testing), which DOES send UDP packets to the openvpn server and correctly sets up the vpn and registers the phone.

I have sent them the Wireshark traces, config files, and syslogs from the phone.  We will see what they come up with.  But for now, the Openvpn on the T38 is still not functional.

Update (see below): although openvpn does not work on the Yealink SIP-T38G, it DOES work on the Yealink SIP-T28P.

I was looking for a secure and simple way to provision an IP phone, and came across the Yealink SIP-T28 phone mentioned in the Elastix Asterisk distribution security documentation. Openvpn is easy to configure, and using Openvpn would allow a simple solution for data encryption (control and RTP), as well as firewall traversal. I saw several posts from individuals who had the SIP-T28 Openvpn working (in spite of poor documentation from Yealink). I have purchased a number of phones from Voipsupply.com, and looking at their website, I saw the SIP-T38G, which looked like an update of the T28, with a color screen as well. The docs for the SIP-T38G on the Yealink website, as well as the data sheet for the SIP-T38G on the Voipsupply.com website, said that the T38G had the Openvpn functionality as well, so I ordered a SIP-T38G from Voipsupply.

The other posts helped with the construction of the openvpn configuration file (they said that the hardest part was finding a tool to create a tar file using . as the root, but actually the standard tar utility did that very easily, using something like tar cvf ../client.tar ., putting client.tar in the parent directory to avoid a warning from tar.) However, when I went to upload client.tar to the phone, there was no option to do so!
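To spell that out, the tar step is just the following (a sketch; the directory name is arbitrary, and its contents are whatever your openvpn client configuration requires):

cd openvpn-client-config      # the directory whose contents will become the root of the tar
tar cvf ../client.tar .       # create client.tar one level up, with . as the root, to avoid the tar warning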

Below, you can see that the Openvpn functionality is advertised on the box that the phone came in. However, all of the vpn configuration sections are missing from both the web configuration page and the on-phone configuration menus. I have included the figure from the SIP-T38G manual which I downloaded from the Yealink website, as well as a screen capture showing that the configuration options are missing.

I wanted to contact someone from Yealink about this, but there is no usable contact information (other than a call to China, which I don't consider a viable option). There is a Yealink UK website with a support forum, so I tried to register. I got an immediate email that my registration would be reviewed by their administrator, and would be inactive until it was. Five days later, I have not heard any more from them. I also tried sending an email to Voipsupply support asking them why the phone they sent did not have the capabilities advertised for it in the data sheet on their website (as well as on the box the phone came in), but five days later, I have heard nothing.

Since I bought this phone entirely for the Openvpn capability, what I now have is a very expensive paperweight (albeit one with a beautiful color screen). I posted this to save others the trouble. I have also ordered a T28, which others say they have made work with Openvpn.

Follow-up: I got my Yealink T28, and it is a completely different story there. The VPN entry was on the advanced network configuration screen, right where it was supposed to be. Uploading the client.tar file was a snap (albeit because I had already constructed the client.tar for the T38G!), and configuration was quick and easy. After uploading client.tar and enabling the VPN, I just went to the Account tab on the web interface, entered the extension number under Label, Display Name, Register Name, and User Name, entered the SIP password under Password, and entered the Asterisk machine's internal tun interface IP address under SIP Server, clicked Confirm, and I could make phone calls over the VPN!

If you have trouble, confirm that you have the tun interface's IP address for the SIP Server (not the Asterisk machine's external IP), that openvpn is started on the server, and that you can ping the Asterisk machine's internal tun interface IP from the Asterisk machine. You should see the vpn being set up in the /var/log/messages log, along with the IP assigned to the phone's end of the vpn, and you should be able to ping the tun interface in the phone from the Asterisk machine. And don't forget to check that the firewall is not getting in the way.

Also, although I did not do this, if you run the openvpn server process on a machine other than the Asterisk machine, you will have to make sure you have the routing entries to get the packets to your Asterisk server. In that case, I would start by making sure that the vpn is set up correctly, and then work on the routing.

Posted in Uncategorized | 6 Replies
DevArt.com UniDAC in C++ Builder 2010 to access SQL Server Compact Edition Posted on by nachbar Reply

DevArt makes a number of data access products for Delphi/C++ Builder as well as .NET. I downloaded a trial of the UniDAC Universal Data Access Components for VCL. Unfortunately, the documentation is sparse, to say the least, and C++ Builder choked on compiling even a very simple application. Here are a few notes on getting this working to access SQL Server Compact Edition (SQL CE).

Also unfortunately, Microsoft has left a glaring (and frankly hard to believe) gap in its product line: there is no built-in way to transfer data between SQL Server (or any other database) and SQL Server Compact Edition. Thus, I wrote a small utility to transfer my data into SQL CE.

The only code I could find on DevArt's website for accessing SQL CE was for Delphi rather than for C++ Builder. However, the following works to access SQL CE and read a list of tables:


UniConnection1->SpecificOptions->Values["OLEDBProvider"] = "prCompact";
UniConnection1->Database = "C:\\work\\VS2010Tests\\CreatedDB01.sdf";
UniConnection1->Connect();
TStrings* list = new TStringList();
UniConnection1->GetTableNames(list, true);
ListBox1->Items->Assign(list);

You add a TUniConnection to the form, and then you must set ProviderName in the TUniConnection to "SQL Server" in the property combo box to avoid the EDatabaseError "Provider is not defined".

However, C++ Builder will still fail to link the project with the error "[ILINK32 Error] Fatal: Unable to open file SQLSERVERUNIPROVIDER.OBJ". Apparently the fix for that is to manually edit your .cbproj project file (!), find the AllPackageLibs element, and add msprovider140.lib to it.

Now, your project will compile and fill the listbox with the list of tables!

Posted in cb2010, Programming, SQL Server Compact Edition, VCL | Leave a reply
Using external SATA / eSATA hard drive with Scientific Atlanta EXPLORER 8300 and 8300HD DVRs from COX Posted on by nachbar 72

The hard drive that worked was the Toshiba PH3100U-1EXB 1 terabyte external hard drive (available right now at Fry's Electronics for $99!). The one that did not work was a two terabyte dual-drive RAID external hard drive of a different brand.

A friend has a Scientific Atlanta EXPLORER 8300 DVR rented from COX Communications in Arizona. It has a fairly small amount of storage (about 74 GB, according to the info screen; see below), and has an external connector labeled SATA. I Googled it, and it looks like some people have had success adding an external hard drive. However, details as to which drives worked and didn't work are hard to come by. I couldn't find anything on Scientific Atlanta's website (apparently now part of Cisco), and when I called COX, the tech said to call Scientific Atlanta myself, since COX doesn't support adding an external hard drive and didn't know how to do it (an uncharacteristically poor customer support experience for COX, which has usually had excellent customer support in my experience).

I went to Fry's Electronics, and they said I needed some sort of media extender, which they didn't have. They said that using an eSATA drive was hit and miss. So I missed, and then I hit. I wanted to report my experience to save others the trouble.

I am not customer support for either COX or Scientific Atlanta/Cisco. I don't know if this will work for others, and I don't know what other combinations might work. I take absolutely no responsibility for any damage you may do by repeating my experience. There is at least some chance that you will lose any recordings you have, so if that is a concern, be sure to watch and/or copy them elsewhere (although I lost no recordings). However, if you find other combinations that work, with this or other DVRs, please post your findings in the comments below.

First, and most important, check to see whether you have an eSATA or a SATA connector. In my case, it was an eSATA connector labeled SATA. Look online for pictures of the difference. The SATA connectors have an elbow inside that the eSATA does not have. You can use a digital camera to take a detailed close-up if needed.

Second, you will need an external hard drive with an eSATA connector (most only have USB or perhaps Firewire connectors). You also need to purchase the cable to connect the eSATA connector on your drive to the eSATA connector on the 8300. Different eSATA cables have different connectors, so be sure to get the right one.

To connect the drive, first turn the 8300 off, then unplug it for at least 15 seconds. Be sure both the drive and the 8300 are unplugged, and connect the eSATA cable between them. Leave the drive unplugged, but plug the 8300 back in. It will go through its boot sequence, then the panel on the front of the 8300 will go black. You then have to wait for it to get a signal from the cable company, and the time will then come on the screen. That may take five or ten minutes.

Once the 8300 has the time showing, turn on the 8300 and the TV so you are watching TV through the 8300. Now, plug in the drive, and if needed, turn on the switch so the light on the drive comes on. Now comes the moment of truth.

The first time I did this, I got a message on the TV that the drive or cable was working improperly. I had to press a button on the remote to dismiss the message, and was informed that the external storage would not be working. You should check the cable, but in my case, after several tries, I always got the same message. Unplug everything and disconnect your drive. Fortunately, Fry's Electronics has a great return policy!

The second time I did this, success! When I turned on the drive, I got a message that the drive would have to be formatted, and that I would lose all information on the drive, including any saved recordings. I pressed the button to proceed. There was no indication that anything was happening, and the 8300 continued to work as normal. I left it running for about 40 minutes, to give plenty of time for formatting, although I don't really know how long formatting takes. After 40 minutes, there was still no indication of progress, and the 8300 indicated that its storage was as full as it had ever been.

So after 40 minutes, I unplugged the 8300 again for 15 seconds, and plugged it back in to reboot it. After it came back up, I turned it on with the remote. I left for a while, and when I came back the 8300 would only play PBS, and attempts to change the channel or bring up the list of saved programs did not work. I wasn't sure what was going on, so after a little while I unplugged the 8300 again, disconnected the external drive, and plugged the 8300 back in again. I then saw a message about advanced functions temporarily not working, even with the external drive disconnected, and again I could only watch PBS. Therefore, I unplugged the 8300 again, again connected the eSATA cable to the external drive, plugged in the external drive, and plugged in the 8300, and just left it alone for a while. About an hour later, I went back, turned on the 8300 with the remote, and the 8300 was working normally, except that the available space had increased dramatically (95% full went to 6% full)! All of the saved programs were still there, but there was a lot more empty space.

Next time, once the external drive was formatted, I would unplug the 8300 for 15 seconds, leave the external drive plugged in, and then plug the 8300 back in, and leave it alone for an hour.

Update for the 8300HD: I tried the same thing for an 8300HD unit. I unplugged the 8300HD for 15 seconds, connected the external drive (a different specimen of the same model), plugged in only the 8300HD (not the drive) until the time showed on the 8300HD, used the remote to turn on the 8300HD so I was viewing the 8300HD's signal through the TV, then plugged in the drive and pushed the on button on the drive. The light on the drive came on, and I got the dialog asking me whether to "Format this external storage device to work with this DVR?" I pressed the A key for "Yes, Format". The dialog disappeared, and there was little or no indication that anything was happening. After about five minutes of nothing, including no flashing of the light on the drive, I again unplugged the 8300HD for 15 seconds, and plugged it back in. After the time showed on the 8300HD, I turned on the 8300HD with the remote, and got a message box that said: "The external storage device connected works with this DVR. NOTE: To safely unplug this device, first unplug power from the DVR, then wait 10 seconds before disconnecting." After about 10 seconds, that message box disappeared, and the DVR worked normally, but had a lot more empty space! So, on the 8300HD, the whole process only took about 15 minutes.

Update for the Explorer 8240HD: The original 8300 was replaced with an 8240HD, for high definition. The hard drive that was used with the 8300 was disconnected after unplugging the 8300 for 15 seconds. It was moved to the new 8240HD and hooked up, with the same result as with the 8300HD above. Note that, since the hard drive does not come on until its button is pushed, if there is a power failure, someone will have to push the on button for the hard drive for it to work again. I tested to see what happens if the button is not pushed until after the 8240HD is turned on, and the only notable effect was that there was less space available on the 8240HD. When the drive was turned on, I again got the message that the external storage works with this DVR. The 8240HD has a 148 GB disk installed, and I added 931 GB (according to the info page) with the new disk.

The old 8300 appears to be confused about how much space is available now, showing a screen indicating that there is still as much space as there was before. Many of the recorded shows are still in the list, but as expected, trying to play them does not work; the DVR goes right to the "Press the list button to see your recordings" screen.

Reviewing postings on the Internet, I would be careful to be sure that the 8300 is unplugged prior to allowing the external drive to lose power or be disconnected, for fear of data corruption and/or loss of all your recordings. Also, the recordings on the drive are reportedly encrypted with a key that is specific to the serial number of your 8300, so if the drive is moved to another box, you will not be able to view the recordings there.

To see info about your 8300, including info about the attached internal and external drives and what interfaces have been enabled by COX, use the keys on the front of the 8300. Press and hold the SELECT button until the little flashing mail icon comes on, then release the SELECT button and press the INFO button. Page forward and back with the Volume + and Volume - buttons. Press the EXIT button when you are done.

If you have success with other DVRs and/or drives or have other experiences, please let me know in the comments!

Posted in Uncategorized | Tagged DVR, eSATA, Explorer 8300, external drive, PH3100U-1EXB, SATA, Scientific Atlanta, Toshiba | 72 Replies
Using IMallocSpy to check for BSTR memory leaks in Embarcadero C++ Builder CB2009 Posted on by nachbar Reply

IMallocSpy is a COM interface provided to check for memory leaks using the IMalloc interface, as used in allocating BSTRs. I was recently trying to figure out which CB2009 objects would release the returned BSTR, so I looked for a way to test for memory leaks. I found this reference for using IMallocSpy using the Microsoft tools http://comcorba.tripod.com/comleaks.htm , but could not find it adapted for CB2009. Here is my adaptation for CB2009.

The answer to my original question:


OleVariant var = node->GetAttribute("xmlns"); // this DOES NOT cause a memory leak - OleVariant takes control of the BSTR, and then frees it
UnicodeString str = (wchar_t*)node->GetAttribute("xmlns"); // this DOES cause a memory leak
WideString ws = node->GetAttribute("xmlns"); // this DOES NOT cause a memory leak
WideString nws = node->GetAttribute("nonesuch"); // this returns a NULL, which throws an exception

If the attribute might not exist and you don't want to throw an exception, you can use:


String str;
OleVariant val = subNode->Attributes[desiredSubAttribute];
if (!val.IsNull())
    str = val;

Here is the code to implement the IMallocSpy tester. It allocates an array to keep track of IMalloc allocations and deallocations, and then dumps its output with OutputDebugString when requested. For details, see the link above. Again, the code below is ported from the example code at comcorba.tripod.com by Jason Pritchard.

As noted, do NOT include this code in software sent to a customer. It is for testing only.


// Need to call SetOaNoCache to turn off the BSTR cache, or else we will ALWAYS report leaks

typedef void (WINAPI *SETOANOCACHE)();

HINSTANCE hDLL = LoadLibrary(L"oleaut32.dll");
if (!hDLL)
    throw Exception("Unable to load oleaut32.dll");

SETOANOCACHE SetOaNoCachePtr = (SETOANOCACHE) GetProcAddress(hDLL, "SetOaNoCache");
if (!SetOaNoCachePtr) {
    throw Exception("Unable to get SetOaNoCache");
}
SetOaNoCachePtr();

// Initialize COM.
::CoInitialize(NULL);

// Initialize the COM memory checker
CMallocSpy* pMallocSpy = new CMallocSpy;
pMallocSpy->AddRef();
::CoRegisterMallocSpy(pMallocSpy);

pMallocSpy->Clear();

// pMallocSpy->SetBreakAlloc(4); // enable this if you want the debugger to break at COM allocation 4

test_com_allocs(); // run your test allocations and deallocations e.g. the test code above

// Dump COM memory leaks
pMallocSpy->Dump();

// Unregister the malloc spy
::CoRevokeMallocSpy();
pMallocSpy->Release();
::CoUninitialize();
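One simple way to make sure none of this ships by accident (my own sketch, assuming your debug build configuration defines _DEBUG) is to compile the spy registration only into debug builds:

#ifdef _DEBUG
CMallocSpy* pMallocSpy = new CMallocSpy;
pMallocSpy->AddRef();
::CoRegisterMallocSpy(pMallocSpy);
#endif

// ... exercise the COM code under test ...

#ifdef _DEBUG
pMallocSpy->Dump();        // report any leaked IMalloc allocations
::CoRevokeMallocSpy();
pMallocSpy->Release();
#endif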

The COM memory checker object:


// IMallocSpyUnit.h
class CMallocSpy : public IMallocSpy
{
public:
CMallocSpy(void);
~CMallocSpy(void);

// IUnknown methods
virtual HRESULT STDMETHODCALLTYPE QueryInterface(
/* [in] */ REFIID riid,
/* [iid_is][out] */ __RPC__deref_out void __RPC_FAR *__RPC_FAR *ppvObject);

virtual ULONG STDMETHODCALLTYPE AddRef( void);

virtual ULONG STDMETHODCALLTYPE Release( void);

// IMallocSpy methods
virtual SIZE_T STDMETHODCALLTYPE PreAlloc(
/* [in] */ SIZE_T cbRequest);

virtual void *STDMETHODCALLTYPE PostAlloc(
/* [in] */ void *pActual);

virtual void *STDMETHODCALLTYPE PreFree(
/* [in] */ void *pRequest,
/* [in] */ BOOL fSpyed);

virtual void STDMETHODCALLTYPE PostFree(
/* [in] */ BOOL fSpyed);

virtual SIZE_T STDMETHODCALLTYPE PreRealloc(
/* [in] */ void *pRequest,
/* [in] */ SIZE_T cbRequest,
/* [out] */ void **ppNewRequest,
/* [in] */ BOOL fSpyed);

virtual void *STDMETHODCALLTYPE PostRealloc(
/* [in] */ void *pActual,
/* [in] */ BOOL fSpyed);

virtual void *STDMETHODCALLTYPE PreGetSize(
/* [in] */ void *pRequest,
/* [in] */ BOOL fSpyed);

virtual SIZE_T STDMETHODCALLTYPE PostGetSize(
/* [in] */ SIZE_T cbActual,
/* [in] */ BOOL fSpyed);

virtual void *STDMETHODCALLTYPE PreDidAlloc(
/* [in] */ void *pRequest,
/* [in] */ BOOL fSpyed);

virtual int STDMETHODCALLTYPE PostDidAlloc(
/* [in] */ void *pRequest,
/* [in] */ BOOL fSpyed,
/* [in] */ int fActual);

virtual void STDMETHODCALLTYPE PreHeapMinimize( void);

virtual void STDMETHODCALLTYPE PostHeapMinimize( void);

// Utilities

void Clear();
void Dump();
void SetBreakAlloc(int allocNum);

protected:
enum
{
HEADERSIZE = 4,
MAX_ALLOCATIONS = 100000 // cannot handle more than max
};

ULONG m_cRef;
ULONG m_cbRequest;
int m_counter;
int m_breakAlloc;
char *m_map;
size_t m_mapSize;
};

// IMallocSpyUnit.cpp
#include <windows.h>     // for IUnknown, IMallocSpy, etc.
#include "IMallocSpyUnit.h"

#pragma package(smart_init)

// Constructor/Destructor

CMallocSpy::CMallocSpy(void)
{
m_cRef = 0;
m_counter = 0;
m_breakAlloc = -1; // no automatic DebugBreak unless SetBreakAlloc() is called
m_mapSize = MAX_ALLOCATIONS;
m_map = new char[m_mapSize];
memset(m_map, 0, m_mapSize);
}

CMallocSpy::~CMallocSpy(void)
{
delete [] m_map;
}

// IUnknown support

HRESULT STDMETHODCALLTYPE CMallocSpy::QueryInterface(
/* [in] */ REFIID riid,
/* [iid_is][out] */ __RPC__deref_out void __RPC_FAR *__RPC_FAR *ppUnk)
{
HRESULT hr = S_OK;
if (IsEqualIID(riid, IID_IUnknown))
{
*ppUnk = (IUnknown *) this;
}
else if (IsEqualIID(riid, IID_IMallocSpy))
{
*ppUnk = (IMallocSpy *) this;
}
else
{
*ppUnk = NULL;
hr = E_NOINTERFACE;
}

AddRef();
return hr;
}

ULONG STDMETHODCALLTYPE CMallocSpy::AddRef( void)
{
return ++m_cRef;
}

ULONG STDMETHODCALLTYPE CMallocSpy::Release(void)
{
ULONG cRef;
cRef = --m_cRef;
if (cRef == 0)
{
delete this;
}

return cRef;
}

// Utilities
void CMallocSpy::SetBreakAlloc(int allocNum)
{
m_breakAlloc = allocNum;
}

void CMallocSpy::Clear()
{
memset(m_map, 0, m_mapSize);
}

void CMallocSpy::Dump()
{
char buff[256];
::OutputDebugStringA("CMallocSpy dump ->\n");
for (int i = 0; i < (int)m_mapSize; i++)
{
if (m_map[i] != 0)
{
sprintf(buff, "IMalloc memory leak at [%d]\n", i);
::OutputDebugStringA(buff);
}
}
::OutputDebugStringA("CMallocSpy dump complete.\n");
}

// IMallocSpy methods
SIZE_T STDMETHODCALLTYPE CMallocSpy::PreAlloc(
/* [in] */ SIZE_T cbRequest)
{
m_cbRequest = cbRequest;
return cbRequest + HEADERSIZE;
}

void *STDMETHODCALLTYPE CMallocSpy::PostAlloc(
/* [in] */ void *pActual)
{
m_counter++;
if (m_breakAlloc == m_counter)
::DebugBreak();
// Store the allocation counter and note that this allocation is active in the map.
memcpy(pActual, &m_counter, 4);
m_map[m_counter] = 1;
return (void*)((BYTE*)pActual + HEADERSIZE);
}

void *STDMETHODCALLTYPE CMallocSpy::PreFree(
/* [in] */ void *pRequest,
/* [in] */ BOOL fSpyed)
{
if (pRequest == NULL)
return NULL;

if (fSpyed)
{
// Mark the allocation as inactive in the map.
int counter;
pRequest = (void*)(((BYTE*)pRequest) - HEADERSIZE);
memcpy(&counter, pRequest, 4);
m_map[counter] = 0;
return pRequest;
}
else
return pRequest;
}

void STDMETHODCALLTYPE CMallocSpy::PostFree(
/* [in] */ BOOL fSpyed)
{
return;
}

SIZE_T STDMETHODCALLTYPE CMallocSpy::PreRealloc(
/* [in] */ void *pRequest,
/* [in] */ SIZE_T cbRequest,
/* [out] */ void **ppNewRequest,
/* [in] */ BOOL fSpyed)
{
if (fSpyed && pRequest != NULL)
{
// Mark the allocation as inactive in the map since IMalloc::Realloc()
// frees the originally allocated block.
int counter;
BYTE* actual = (BYTE*)pRequest - HEADERSIZE;
memcpy(&counter, actual, 4);
m_map[counter] = 0;
*ppNewRequest = (void*)(((BYTE*)pRequest) - HEADERSIZE);
return cbRequest + HEADERSIZE;
}
else
{
*ppNewRequest = pRequest;
return cbRequest;
}
}

void *STDMETHODCALLTYPE CMallocSpy::PostRealloc(
/* [in] */ void *pActual,
/* [in] */ BOOL fSpyed)
{
if (fSpyed)
{
m_counter++;
if (m_breakAlloc == m_counter)
::DebugBreak();

// Store the allocation counter and note that this allocation
// is active in the map.
memcpy(pActual, &m_counter, 4);
m_map[m_counter] = 1;
return (void*)((BYTE*)pActual + HEADERSIZE);
}
else
return pActual;

}

void *STDMETHODCALLTYPE CMallocSpy::PreGetSize(
/* [in] */ void *pRequest,
/* [in] */ BOOL fSpyed)
{
if (fSpyed)
return (void *) (((BYTE *) pRequest) - HEADERSIZE);
else
return pRequest;
}

SIZE_T STDMETHODCALLTYPE CMallocSpy::PostGetSize(
/* [in] */ SIZE_T cbActual,
/* [in] */ BOOL fSpyed)
{
if (fSpyed)
return cbActual - HEADERSIZE;
else
return cbActual;
}

void *STDMETHODCALLTYPE CMallocSpy::PreDidAlloc(
/* [in] */ void *pRequest,
/* [in] */ BOOL fSpyed)
{
if (fSpyed)
return (void *) (((BYTE *) pRequest) - HEADERSIZE);
else
return pRequest;
}

int STDMETHODCALLTYPE CMallocSpy::PostDidAlloc(
/* [in] */ void *pRequest,
/* [in] */ BOOL fSpyed,
/* [in] */ int fActual)
{
return fActual;
}

void STDMETHODCALLTYPE CMallocSpy::PreHeapMinimize( void)
{
return;
}

void STDMETHODCALLTYPE CMallocSpy::PostHeapMinimize( void)
{
return;
}

Posted in cb2009 | Leave a reply
Embarcadero C++ Builder Sending UnicodeString via COM truncates string to half Posted on by nachbar Reply

CB2009 defines a BSTR as a wchar_t*, which is the type returned by c_str() when UNICODE is set. Therefore, if you call a COM function that expects a BSTR with UnicodeString.c_str(), the compiler passes the wchar_t* that the function expects. Sounds good! Except it doesn't work!

Actually, both BSTR and the character array in UnicodeString prefix the array of wide chars with a four-byte integer that gives the length. However, BSTR expects this to be the length IN BYTES, whereas UnicodeString makes this the length IN CHARACTERS. Thus, the server gets the wchar_t*, looks right before the pointer for an int, and only uses that many bytes. So, if your UnicodeString is seven chars long, and thus 14 bytes long, CB2009 sets the length to seven, and COM only accepts the first seven bytes (and thus the first four chars). So, all your strings are cut in half!

To make things worse, UnicodeString.Length() does not count characters up to the terminating null, but rather just returns that stored integer. So, if you patch the integer to be 14 as COM expects, UnicodeString.Length() now returns 14 for your seven-character string! We dare not mess with UnicodeString's data.

The solution is to use WideString instead. Instead of:

UnicodeString myString = L"My Data";
ptr->ComFunctionExpectingBSTR(myString.c_str()); // this compiles, but COM only uses the first half of the string

use

UnicodeString myString = L"My Data";
ptr->ComFunctionExpectingBSTR(WideString(myString).c_bstr()); // this has the correct length

References:
http://msdn.microsoft.com/en-us/library/ms221069.aspx Microsoft's reference for the BSTR type in MSDN/Win32 and COM Development/Component Development/COM/Automation Programming Reference/Data Types, Structures and Enumerations/IDispatch Data Types and Structures

http://docwiki.embarcadero.com/RADStudio/en/Unicode_in_RAD_Studio#New_String_Type:_UnicodeString Unicode in RAD Studio

NOTE: In my testing, this is a problem when sending data to Microsoft Outlook 2007. It did not appear to cause a problem when sending to Crystal Reports, so the issue might well be with the server code rather than the COM subsystem, depending on how the server determines the length of the passed string. I have seen example code that just passes a WideString to COM, but for me, the compiler gives a "Type mismatch" error unless I pass the pointer returned by c_bstr().

To see how the length of the string is set:

wchar_t* lit = L"My String";
int* litIPtr = (int*) lit;
litIPtr--;
int litLen = *litIPtr;

WideString ws = lit;
wchar_t* wsPtr = ws.c_bstr();
int* wsIPtr = (int*) wsPtr;
wsIPtr--;
int wsLen = *wsIPtr;

UnicodeString us = lit;
wchar_t* usPtr = us.c_str();
int* usIPtr = (int*) usPtr;
usIPtr--;
int usLen = *usIPtr;

// int actualLen = StrLen(lit); // if UNICODE is set
int actualLen = wcslen(lit);

ShowMessage("For a " + String(actualLen) + " character string, Literal gives " +
String(litLen) + ", WideString gives " +
String(wsLen) + ", UnicodeString gives " + String(usLen));

Posted in Uncategorized | Leave a reply
Windows Communication Foundation 3.5 Unleashed Errors Posted on by nachbar Reply

I am reading this book; it is very well written, and the people who wrote it are obviously very smart. Unfortunately, it is chock full of technical errors. I have wasted hours trying to get the examples to work. In the hope of saving others some time, I am listing a few of the problems I found.

In addition to these, there are lots of other errors I found in my first, casual, reading, especially in the code.

In the Fundamentals chapter (2)

Page 45, step 1: add the app.config file to the Host project, NOT the DerivativesCalculatorService project (see page 53)
error: Service DerivativesCalculator.DerivativesCalculatorServiceType has zero application (non-infrastructure) endpoints. This might be because no configuration file was found for your application, or because no service element matching the service name could be found in the configuration file, or because no endpoints were defined in the service element.

Page 56, step 4: there should not be a colon before svcutil (and there is not in the screenshot below)

Page 58: The class should be DerivativesCalculatorClient rather than DerivativesCalculatorProxy (since that is the class created by the tool; see Listing 2.6)

Pages 67-72: Getting the service to run under IIS 7.0 required quite a number of additional steps, at least in my configuration.

1) Page 69, Step 3: Must use Add Application rather than Add Virtual Directory, at least in IIS 7.0. I wasted quite a bit of time on this one, until I found http://social.msdn.microsoft.com/forums/en-US/wcf/thread/49d9279f-2bc1-482b-8bb0-da1261736acb/ , where this exact problem with this exact same example was noted in April 2006.
error: The type DerivativesCalculatorService.Calculator, provided as the Service attribute value in the ServiceHost directive could not be found.

2) Need to give IIS_IUSRS permission to the DerivativesCalculatorService directory, so IIS can use the config file

3) Need to give the Anonymous Login user permission to the DerivativesCalculatorService directory, so IIS will serve it to the user

4) I had to fix the bindings to the .svc extension using
"c:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\ServiceModelReg" -r
It is possible that I had that problem because I enabled IIS after I had already installed VisualStudio. Or not.

5) I had to turn on WCF Activation with Control Panel/Programs/Turn Windows Features On/Microsoft .NET 3.0/WCF Activation for http and non-http; see http://michael-arnett.spaces.live.com/blog/cns!5AA848FF3F707F99!1093.entry?
error: HTTP 500 - Handler "svc-Integrated" has a bad module "ManagedPipelineHandler" in its module list

Page 75: in my testing, the MSFT string was NOT encrypted when I used the netTcpBinding

Page 79: to use the WcfTestClient with this demo, run:
WcfTestClient http://localhost:8000/Derivatives/
or
WcfTestClient http://localhost:8000/Derivatives/?wsdl

Posted in Uncategorized | Leave a reply
Neopost IJ25 Postage Meter Pricing Scam Posted on by nachbar 30

I just got off the phone with Neopost, whose postage meter I have been using for a number of years. I guess I had never put together the true cost of using it. The reason for my call was that my postage meter suddenly stopped working, saying "Warning, Ink Expired".

It turns out that the Neopost IJ25 postage meter is programmed to stop working if the same ink cartridge has been installed for more than one year (as in 365 days). When I called, I was told that we should have received a single warning about a month earlier that the ink would expire soon. However, if we had pushed OK that one time, the warning would disappear, and the next warning would be when the postage meter stopped working altogether. I don't know if we got the warning or not, although nobody remembers seeing it.

Nonetheless, my Neopost IJ25 postage meter is dead in the water. The cost for a replacement ink cartridge is $124: $86 for the cartridge (!!), $29 for overnight shipping and handling (!!), and $9.32 for tax. 2-3 day shipping would have been $23, and 10-day shipping $12. Clearly, Neopost has a strong incentive to set up its IJ25 postage meter so that we would not see the warning, and it is not shy about overcharging for shipping.

In addition, Neopost gouges us to update the meter for postal rate changes (so it prints 43 cents instead of 42 cents); this was $91.89 on 12/22/08, plus a "special deal" of $67.02 for one year of subsequent updates. In the notice, Neopost gleefully noted that the postal service has agreed to start increasing the postage more often, probably twice a year. The thing is, to get the special deal, I have agreed to automatic, non-cancelable upgrades; to cancel, I must give at least 30 days, but no more than 60 days, notice. Again, clearly designed to make it more difficult to cancel.

The only thing that is reasonably priced is the actual rental of the Neopost IJ25 postage meter and scale, at $219.57 for 12 months.

However, the actual cost of using the Neopost IJ25 meter for 12 months is: $219 rental, about $90 reprogramming, and $124 for the timed ink cartridge, or $433, or about twice the quoted rental cost. Not to mention the inconvenience of having a postage meter that dies unexpectedly and with no (practical) notice.

So why is that a scam? For the same reason that it is a scam when an airline quotes an airfare that doesn't include baggage. Neopost should be honest about the actual cost of renting its equipment, and not gouge its customers for the rate reprogramming and ink cartridges. There is no excuse for charging $29 for overnight shipment of a 4-ounce cartridge. And, worst of all, it should not design the Neopost IJ25 postage meter to fail unexpectedly and without (practical) warning, significantly inconveniencing its customers.

You can bet that between now and next December, I will be looking into other options that don't include a meter programmed to fail.

If anyone from Neopost cares to dispute or comment on this post, I will be happy to talk with them. If anything is incorrect, I will certainly correct it.

Posted in Uncategorized | 30 Replies
Dreamweaver Secure FTP (SFTP) Configuration Posted on by nachbar 5

The Dreamweaver configuration documentation for Secure FTP is poor to non-existent. I am not talking about how to check Dreamweaver's "Use Secure FTP (SFTP)" checkbox (which is trivial, but for which there is LOTS of documentation), but rather how to set up your secure server to receive the Secure FTP communication from Dreamweaver.

The trick is that Dreamweaver's Secure FTP is not FTP! It is ssh! It runs on port 22, which is not configurable (within Dreamweaver, as of CS3).

So, for all of us who have wasted hours setting up an ftp server, and trying to figure out why Dreamweaver wouldn't connect through the firewall, now you know!

You just have to set up sshd to receive connections on port 22, and don't waste your time with vsftpd!

When configured properly, the (Ubuntu) /var/log/auth.log will report the connection via sshd, and sshd will report "subsystem request for sftp". sftp is an ftp-like program that runs under and through ssh. You can read more about the sftp client via the Linux man page (man sftp). You can use the sftp client to connect between Linux computers to transfer files over ssh the way you would use ftp, but with no need to set up an ftp server (again, sftp uses the sshd server).
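For example, from another Linux box, the same mechanism looks like this (the host name is a placeholder):

sftp user@www.example.com      # connects over ssh on port 22; no ftp server involved
sftp> put index.html           # upload a file
sftp> ls
sftp> quit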

Posted in Dreamweaver | 5 Replies
Ubuntu Server Configuration Posted on by nachbar Reply

A few random notes.

To start the iptables rules for the firewall on startup, first, create the firewall script, adding the iptables rules one by one, and save the rules with:


iptables-save > /etc/default/iptables
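For reference, a firewall script of the kind I mean might look roughly like this; the policy shown is illustrative only, so build up your own rules one by one:

#!/bin/sh
# illustrative ruleset only
iptables -F
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # ssh
iptables -A INPUT -p tcp --dport 80 -j ACCEPT    # apache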

Then, to load the rules automatically with the new Upstart init system, I just create a new file, /etc/event.d/iptables :


# Script to start firewall
# Save rules with iptables-save > /etc/default/iptables

start on runlevel 1
start on runlevel 2
start on runlevel 3
start on runlevel 4
start on runlevel 5

exec /sbin/iptables-restore < /etc/default/iptables

That way, you aren’t changing any existing files, just adding the new one. On every reboot, the iptables rules get loaded. You can check that they are loaded with:


sudo /sbin/iptables-save | less

Posted in Ubuntu | Leave a reply
iMovie 09 Error Audio Exporting Problem Posted on by nachbar 45

Apple recently updated its iLife suite (including iMovie) from '08 to '09.  I haven't done much with movie creation, but wanted to make a short video demonstration of my 3-dimensional breast augmentation simulator.  We shot some video at the office.  I easily imported the tape into iMovie '08, but when I went to edit it, I noticed that iMovie '09, which had just been released, had two features I wanted: precision trimming, and the ability to slow down a clip.  So, I ordered the upgrade.  It came quickly, and installed easily. The About box reports that it is version 8.0 (717). Check for Updates reports that it is the current version.

I edited my video until I was happy with it.  iMovie crashed a couple of times, but did not lose any of my work.  I went to export the video into a format I could post.  That's when my problems began!

On many of the clips, I had set the audio to zero, so I could use voiceover to explain them.  I also adjusted the speed so that I would have more time for my voiceover than the length of the clip.  In a  couple of places I added some simple transitions.

However, on the exported video, in one of the clips I could still hear the audio that had been recorded with the video, even though the volume was set to zero!  (Now, of course, the audio was very slow and sounded like a growling monster).  And on one of the other clips where I had not adjusted either the speed or the volume, I could not hear the audio at all!

A search of the Apple Support forum quickly determined that I was not the only one having this problem.  For example, see http://discussions.apple.com/thread.jspa?threadID=1888183&tstart=0 and http://discussions.apple.com/thread.jspa?threadID=1896387&tstart=0

Apparently, if you set the speed to other than the Apple Defaults (e.g., 50%, 25%, 12.5%), sometimes the audio volume adjustment is not respected in export, although it works fine when previewed in iMovie.  And sometimes later clips AFTER A TRANSITION do not have their audio if there is an earlier clip somewhere in the movie with one of these non-standard speed adjustments EVEN IF THE AFFECTED CLIP ITSELF DOES NOT HAVE ANY SPEED ADJUSTMENT.  And this occurs on export EVEN IF THE MOVIE PREVIEWS CORRECTLY IN iMOVIE!

This problem appears to occur in all exports, to QuickTime and not to QuickTime, at all speeds, and even to DVD (although I did not test that).

In my case, I was able to work around the problem by changing the speed of the clip whose audio was still audible to the standard 12.5% (it had the problem when set to 19%), and by deleting two transitions.

Any ideas on how to get Apple to fix its broken iMovie '09?

Posted in Mac | 45 Replies
Konica Minolta Twain Driver Not Recognized Posted on by nachbar 6

I recently had a problem getting the Konica Minolta Twain Driver for the C253 scanner (among others) to be recognized by the twain device manager, and thus it was not listed as one of the twain devices available, either in PhotoShop or in Atalasoft DotTwain.  The nice people at our local Hughes Calihan Konica Minolta here in Phoenix helped me figure this out, along with Lou Franco of Atalasoft (see his comments below), and I wanted to post the solution for anyone having a similar problem.

Ultimately, the problem was that another software package (I believe it was Business Objects Crystal Reports XI Release 2) installed a copy of LIBEAY32.dll into the C:\Windows\System32 directory.  LIBEAY32.dll is part of the open source OpenSSL suite, and I have 18 (!) different versions on my system.  They mostly live in harmony, but when the Konica Minolta twain driver tried to load, it would get the version of LIBEAY32.dll that Crystal Reports had put into System32 (since that is very early in the Dynamic Link Library Search Order; see http://msdn.microsoft.com/en-us/library/ms682586.aspx), and when the LIBEAY32.dll that was loaded did not have the proper ordinal entry point, the Konica Minolta twain driver would not be loaded by the twain device manager.

When PhotoShop loaded, it would emit an error message about the missing ordinal in LIBEAY32.dll; when File/Import was pulled up in the menu, the Konica Minolta twain device would just be missing, and there would be no error here.

Compounding the problem was that my test application using the Twain source manager via the Atalasoft DotTwain ShowSelectSource() function did NOT issue any error.

However, a test application I made with Visual C++ loading the Twain device source library for the Konica Minolta scanner did produce the error.

It turns out that the only difference between my test application and Photoshop was the SetErrorMode() function, which sets the process ErrorMode. You can call GetErrorMode() and SetErrorMode() following these imports:
[DllImport("Kernel32.dll")]
private extern static uint SetErrorMode(uint mode);

[DllImport("Kernel32.dll")]
private extern static uint GetErrorMode();

If you then call SetErrorMode(0) before the Atalasoft ShowSelectSource() function, the user DOES see the error messages from the operating system. However, the Twain Source Manager twain_32.dll does not return any error code to ShowSelectSource(), so obviously ShowSelectSource() cannot return any error code either. As noted below, the only way for a calling program to get an indication that a source did not load is to call the twain source DLL directly rather than through the Twain Source Manager, and observe that the LoadLibrary call returns NULL.

Having figured out that the problem was that the Business Objects LIBEAY32.dll was in the System32 directory, the solution was a little difficult.  The Konica Minolta Twain Driver worked once the LIBEAY32.dll in System32 was removed (or renamed), but Crystal Reports XI Release 2 tries to repair its installation if it finds that file missing.

However, by placing the LIBEAY32.dll from the Konica Minolta twain driver directory (in a subdirectory of C:\Windows\twain_32, where the twain device files live) into System32, both the Konica Minolta twain driver and Crystal Reports seem to be happy.  For good measure, I put a copy of the LIBEAY32.dll that Crystal had put into System32 into Crystal's own directory (since that has higher priority in the .dll search order), so that Crystal should load its own LIBEAY32.dll

For reference, I tracked down the problem by making a test Visual Studio C++ app and trying to load the Konica Minolta twain device driver (mostly from http://msdn.microsoft.com/en-us/library/784bt7z7.aspx) :

#include <windows.h>

Then, in the click handler:

HINSTANCE hDLL; // Handle to DLL
UINT uErrorMode = SetErrorMode(0); // so you get an error message from the OS
LPCWSTR str = L"C:\\Windows\\twain_32\\KONICA MINOLTA\\RTM_V3\\kmtw3R.ds";
hDLL = LoadLibrary(str);

The LoadLibrary call produces a MessageBox (see the SetErrorMode() docs) with the error, and returns NULL, if there is a problem loading the twain source driver library. Note that the twain device driver will need other files in that directory, and you will get those errors first; you can fix that problem by adding the directory to the PATH for testing. System32 will still be ahead of the PATH (but not ahead of the application .exe directory), so you will get the error message you are looking for. Also note that the twain device driver library, in actual use, will NOT need the PATH to be set; the twain device manager appears to take care of that.

Another approach that works is to change the current working directory to the directory containing the twain source driver before calling LoadLibrary on the driver, as this will more closely approximate the DLL search order used by the twain source manager. Again, the problem is that, although the source driver does install the files it needs into its own directory, the LIBEAY32.dll that Crystal installs into System32 is still AHEAD of the LIBEAY32.dll installed into the source driver's directory! (see Lou Franco's comments below) DLL search order is a fairly complex topic, and can vary depending on a number of factors; google "Dynamic Link Library Search Order" for info. Note that, unless SafeDllSearchMode is disabled, changing the current working directory does not change the DLL search order.
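A sketch of that approach (the driver path is the one from above; error handling is minimal, and this is test code only):

// Load the twain source driver with its own directory as the current directory,
// to better approximate the search order the twain source manager would use.
wchar_t oldDir[MAX_PATH];
::GetCurrentDirectoryW(MAX_PATH, oldDir);
::SetCurrentDirectoryW(L"C:\\Windows\\twain_32\\KONICA MINOLTA\\RTM_V3");
::SetErrorMode(0);   // let the OS show its own error dialog if a dependent DLL is bad
HINSTANCE hDrv = ::LoadLibraryW(L"kmtw3R.ds");
::SetCurrentDirectoryW(oldDir);
if (hDrv == NULL)
    ::OutputDebugStringW(L"twain source driver failed to load\n");
else
    ::FreeLibrary(hDrv);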

Also, when I tried this with a 64 bit version of Vista, Crystal installs LIBEAY32.dll into /Windows/SysWOW64, which is the directory that takes the place of /Windows/System32.

Regrettably, when LoadLibrary fails, FormatMessage produces only a message that the operating system could not run the file. The only detailed info available seems to be the message box provided directly to the user by the OS, and only when SetErrorMode(0) is in effect.

See also: http://www.cube316.net/blog/archives/200710/147.html for a similar problem.

Edited 12/31/08 10PM to add info about using SetErrorMode() to show the error message box, that the lack of error reporting to the application occurs at the Twain Source Manager level, to reinforce the info about DLL Search Order, and to take Lou Franco's comments into account; edited 12/3/09 to add info re 64-bit Vista - JMN

Posted in Programming | Tagged Atalasoft, Crystal Reports, Konica Minolta, Twain | 6 Replies
Ruby on Rails 2.3 and PostgreSQL on Ubuntu Hardy 8.04 LTS and 10.04 LTS Server Posted on by nachbar 11
Update: A few changes for 10.04 LTS, using PostgreSQL 8.4

When running rails (for anything other than checking the version), I got the error "No such file to load: net/https". That was fixed by installing libopenssl-ruby, as in:
aptitude install libopenssl-ruby
This should probably be done before installing rails, although installing it after rails was installed fixed the problem

gem update --system produces a message that "gem update --system is disabled on Debian. RubyGems can be updated using the official Debian repositories by aptitude or apt-get".

Instead, I found the following at https://help.ubuntu.com/community/RubyOnRails

sudo gem install rubygems-update
sudo update_rubygems (note: this will clean out your gems!)

Note: I had to reinstall rails after update_rubygems, which I ran after I had installed rails. I would probably do this before installing rails.

The -y flag is now the default, and if you use it you get a message to that effect

irb and apache2 were already installed by the time I got to those steps. There is an apache2 metapackage that I would probably use instead of the apache 2.2 packages noted below if I still needed to install apache2.

Before you can run the programs you have installed with gem (e.g. Rails), you will need to add:
export PATH=/var/lib/gems/1.8/bin:$PATH

When I ran the Passenger installation, I got a message to install three more packages:

aptitude install apache2-prefork-dev
aptitude install libapr1-dev
aptitude install libaprutil1-dev

However, only the first one of those actually did anything. Since the passenger installation gives good diagnostics, it is reasonable to let that tell you what still needs to be installed.

Following the instructions from the Passenger install for configuring Apache, the sample configuration included some inline comments with #; these caused an error in Apache2 and had to be moved to a separate line.

Passenger may need a file named .htaccess to be installed in the /public directory of your rails app, with the following two lines:

PassengerEnabled on
PassengerAppRoot /full/path/to/the/root/of/your/rails/app

The PassengerAppRoot should NOT be your rails app's public directory, but the .htaccess file needs to be in that public directory. The Passenger docs incorrectly state that the PassengerAppRoot is assumed to be the parent of the public directory, but that is only true if the public directory is named in DocumentRoot, and not if you are using an alias.

Also, if you are using an alias and the Rails app is not in the root of the website, you may need config.action_controller.relative_url_root = "/test" in your config/environment.rb file
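For reference, the relevant fragment of my sites-available file looked roughly like this; the /test alias and the paths are examples, not the values you should use:

# Apache 2.2 site file fragment (alias and paths are examples)
Alias /test /home/deploy/railsapp/public
<Directory /home/deploy/railsapp/public>
Order allow,deny
Allow from all
</Directory>
# the .htaccess with PassengerEnabled / PassengerAppRoot lives in that public directory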

Also note that, except where noted, the installation commands need to be run as root (sudo su -) or with sudo.

There has been much confusion and consternation about setting up Ruby on Rails with PostgreSQL

(e.g., see: http://joshkim.org/2008/10/26/postgresql-ruby-and-rails-i-quit)

There seems to be a lot of support for running this on a Mac, but less so for running it on modern Ubuntu. There are several moving parts here, so once I had figured them out, I wanted to record my notes to save others some of the same aggravation.

Note that there are some other issues and differences between MySQL and PostgreSQL for example see: http://blog.tiagocardoso.eu/rubyonrail/2008/02/20/porting-to-postgres-on-rails/

In particular, one difference noted there between PostgreSQL and other SQL databases is that PostgreSQL is stricter about the difference between single and double quotes.  Double quotes are for delimited identifiers, such as table and column names, and prevent them from being mistaken for keywords.  For example, "select" could be the name of a table or column or variable, whereas SELECT is an SQL keyword.  Single quotes are for string constants.  Use two adjacent single quotes for a literal single quote, as in 'Dianne''s horse'.  Where this will get you is if you use double quotes in :conditions => and :joins =>, which will work in MySQL but not PostgreSQL.  Another difference is that like may need to be changed to ilike in PostgreSQL if you want case-insensitive queries.
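A few plain-SQL illustrations of those rules (the tables are hypothetical):

SELECT "select" FROM "order";                             -- double quotes: identifiers that would otherwise be keywords
SELECT * FROM products WHERE title = 'Dianne''s horse';   -- single quotes: string constants; '' gives a literal quote
SELECT * FROM products WHERE title ILIKE 'my%';           -- ilike for a case-insensitive match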

This post doesn't attempt to address all issues, but just to get a system from a base Ubuntu Hardy (8.04 LTS) to a working Ruby on Rails 2.2/PostgreSQL 8.3 system.  This will also install working sqlite3 and postgresql drivers, and will test the installation as we proceed.

It also doesn't attempt to address migration of data; do a web search on "mysql postgresql yml" to see several alternatives here.

(Some of these installation instructions are modified from Agile Web Development with Rails, third edition beta, which I assume you already have)

apt-get update
apt-get upgrade
aptitude install build-essential

if aptitude is not installed, that will cause an error.  Install with:

apt-get install aptitude

Now:

aptitude install ruby rubygems ruby1.8-dev libsqlite3-dev
gem update --system

At the end of a lot of output, was the notice that

RubyGems installed the following executables:

/usr/bin/gem1.8
If 'gem' was installed by a previous RubyGems installation, you may need to
remove it by hand

In my case, I did have to remove the old gem file by hand:

mv /usr/bin/gem /usr/bin/gem.old
mv /usr/bin/gem1.8 /usr/bin/gem

If you get an error about the uninitialized constant Gem::GemRunner (NameError), this is your problem

Then:

gem install -y rails

if you get an error that it "could not find rails (> 0) in any repository", simply try again

gem install -y rails

To use irb, you need:

aptitude install irb

if you want git:

aptitude install git-core git-doc

if you want apache:

aptitude install apache2.2-common

For passenger:

gem install passenger
passenger-install-apache2-module

You may get some instructions about additional software to install for the passenger apache2 module to be compiled.  You will also get some instructions for configuring passenger to work under apache2.  Be aware that, with Ubuntu, you are encouraged NOT to edit the apache2.conf file, which may need updating with a new version of Ubuntu, but rather to edit other files included by apache2.conf, such as httpd.conf and the sites-available files (linked into sites-enabled when you want them to be enabled).

To use sqlite3 (e.g., for initial testing)

gem install sqlite3-ruby

For PostgreSQL:

aptitude install postgresql postgresql-client

Now, in order to access PostgreSQL, you need to have a PostgreSQL user defined, as well as a PostgreSQL database defined.

The PostgreSQL installation creates the postgres Linux user, the postgres PostgreSQL user, and the postgres database, so to get into the database, you can just (from root):

su postgres
psql

and poke around (psql has pretty good help: use \l to list databases, \du to list users, \? for help, and \q to quit.)

Exit psql with \q

To create a PostgreSQL user so you can test rails with PostgreSQL (in my case, I created user nachbar, since that is my Linux username) FROM THE SHELL (not from psql):

su postgres
createuser nachbar

(answer y to the question about being a superuser)

If you get an error that, for example, "Ident authentication failed for user xxxx", that means you forgot the su postgres.  Ident authentication means that PostgreSQL will let the Linux user postgres in because there is also a PostgreSQL user postgres

Once you have created your user (in my case, nachbar), AS THAT USER, try:

psql postgres

Here, postgres is the DATABASE name to which you are connecting.  If you don't specify a database name, psql will try to connect to a database with the same name as your username, which does not exist.  (try just psql here to see that error)

Once you have psql working and your user set up in PostgreSQL, create a test rails application and test sqlite3 as your own user (i.e., not root):

rails test
cd test
script/generate model product title:string
rake db:create
rake db:migrate
script/console
t=Product.first
(that should return nil, since there are no products saved yet)
p=Product.new
p.title="My Title"
p.save
t=Product.first
t.title

The last command should read back "My Title" from your saved Product

Now, exit the console, and switch your app to PostgreSQL

exit

edit config/database.yml:

under development:, change adapter to postgresql and database to test_development.  No need to set a username, password, or anything else
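The development section then ends up looking something like this minimal sketch (the other environments can stay on sqlite3 for now):

development:
  adapter: postgresql
  database: test_development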

Install the postgresql adaptor (as root)

First: install the postgreSQL header files:

aptitude install libpq-dev
gem install postgres

Then, test it:

irb
require 'rubygems'
require 'postgres'

Now, (back as your own user, not root, and in the rails test project directory): create the PostgreSQL database:

rake db:create
rake db:migrate

Test that these were created in PostgreSQL:

psql test_development
\l  (to list databases)
\dt  (to list tables - should include the products table)
\q  (to exit psql)

Run the same script/console test above, which should give the same results as it did with sqlite3.

Check the PostgreSQL database:

psql test_development
select * from products;
(don't forget the semicolon.  Should show your "My Title" product, now in PostgreSQL)
\q

Rails is running with PostgreSQL!

Note that we did not set a user or password in database.yml, because we had created the nachbar user as a PostgreSQL superuser, and that was the user that script/console and rake were running as.  We used Ident authentication in this case.  There are several choices here, including creating another PostgreSQL user under which Rails will run.  Since nachbar is now a PostgreSQL superuser, you can run the createuser command as nachbar or postgres, but not as root!  In PostgreSQL, if the password is null, password authentication will always fail.

Other miscellaneous notes

PostgreSQL configuration notes:

PostgreSQL is set up to allow multiple clusters.  Installation creates a single cluster, main, which will probably be all you need.  In the following, main could refer to multiple directories if you have multiple clusters.  Also 8.3 is my PostgreSQL version number.  Other versions will, of course, have different directory names.

PostgreSQL configuration goes into /etc/postgresql/8.3/main and /etc/postgresql-common

PostgreSQL bin is in /usr/lib/postgresql/8.3/bin .  That directory is NOT added to the PATH, but appropriate links for psql, createuser, etc. are placed into /usr/bin.  Other commands, such as pg_ctl may not be in the path.  The base path for the Ubuntu bash shell is set in /etc/login.defs file in the ENV_SUPATH and ENV_PATH vars

The data directory is /var/lib/postgresql/8.3/main; see /var/lib/postgresql/8.3/main/postmaster.opts

According to /etc/init.d/postgresql-8.3, environment vars are set in /etc/postgresql/8.3/cluster/environment

possible options to /etc/init.d/postgresql-8.3 are:

start, stop, restart, reload, force-reload, status, autovac-start, autovac-stop, autovac-restart

(the functions are sourced from /usr/share/postgresql-common/init.d-functions)

On init, the init.d script looks for directories in /etc/postgresql/version (by default, main exists there); then, in those directories, it looks for postgresql.conf, which is the file that sets the data directory (/var/lib/postgresql/8.3/main), the hba_file and ident_file (in /etc/postgresql/8.3/main), the port, etc., as well as all sorts of configuration FOR THE SERVER

start.conf determines whether the specific server gets started on bootup

to backup:

pg_dumpall > outputfile

to stop the server:

pg_ctl stop

some samples in

/usr/share/postgresql/8.3

Rake and Rails data on PostgreSQL

The postgresql database driver supports rake commands such as


rake db:drop
rake db:create
rake db:schema:load RAILS_ENV=production

Be aware that PostgreSQL does not use autoincrement fields, but rather implements a more structured system using PostgreSQL sequences. Rails will create these for you, and will tell you about it. Thus, the rake db:schema:load will produce messages like:


-- create_table("appts", {:force => true})
NOTICE: CREATE TABLE will create implicit sequence "appts_id_seq" for serial column "appts.id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "appts_pkey" for table "appts"

Those notices tell you that this mechanism is working properly.

Rails, Passenger, and PostgreSQL users

As indicated in the Passenger docs, passenger will run as the user that owns config/environment.rb, but that can be changed as indicated in the User Switching section of the Passenger docs, and can be modified by the PassengerUserSwitching and PassengerDefaultUser options in your httpd.conf or apache2 site files. Whichever user Passenger runs as must have a PostgreSQL user with the appropriate rights. Options include making that user a PostgreSQL superuser, or instituting better access controls with SQL GRANT commands.

In addition, options include other than the Ident mechanism of logging into PostgreSQL that we have discussed above. See the PostgreSQL website for details.

As one example, you can create the group and user passenger for Passenger to run as, including the corresponding PostgreSQL user:

adduser --system --group passenger

Change the group for the railsapp directory files by cd to the railsapp directory and issuing

chgrp -R passenger *

Change the mode for the log files and directory, so that the group (now passenger) can change those

cd log
chmod g+w .
chmod g+w *

Create the PostgreSQL user passenger:

su postgres
createuser passenger

(answer n to all three questions: superuser, create databases, create roles)

Grant access to the passenger PostgreSQL user:

su postgres
psql
\c myrailsapp_production
grant all on audits, sessions, users, mymodel1s, mymodel2s to passenger;
grant all on sequence audits_id_seq, sessions_id_seq, users_id_seq, mymodel1s_id_seq, mymodel2s_id_seq to passenger;
\q

Either change the owner of config/environment.rb to passenger, or set PassengerDefaultUser to passenger

Now Passenger will run as the passenger user, and will also set the effective group to the default group of the passenger user (also passenger, in this setup). It will access PostgreSQL as the PostgreSQL passenger user, as well, using ident authentication.  Of course, ident authentication works only within a single machine.  To access PostgreSQL from another machine, set the hostname, username, password, and port in Rails.

touch tmp/restart.txt to restart Passenger on the next request.

Setting timezone on Ubuntu (different than setting it for your Rails app)

ln -sf /usr/share/zoneinfo/America/Phoenix /etc/localtime

Setting up the mail server on Ubuntu so Action Mailer Works:

Mail: exim4 was already running, but would not deliver except locally.  Make changes to /etc/exim4/update-exim4.conf.conf: especially, change configtype to internet (so mail can go out to the internet), but leave local_interfaces at 127.0.0.1 so mail will be accepted only from the local system.  Also change readhost to myhostnamehere.com so headers show that as the origin, and set hide_mailname so readhost works.  Also, change /etc/mailname to mydomainnamehere.com, to indicate the domain of the user sending the mail.
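For reference, the lines I mean in /etc/exim4/update-exim4.conf.conf look roughly like this (the host names are placeholders); run update-exim4.conf and restart exim4 afterwards so the changes take effect:

dc_eximconfig_configtype='internet'
dc_local_interfaces='127.0.0.1'
dc_readhost='myhostnamehere.com'
dc_hide_mailname='true'
# then: sudo update-exim4.conf && sudo /etc/init.d/exim4 restart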

The Virtual Server

To reproduce what I have done, I actually implemented the above on the 1&1 VPS I Linux package imaged to Ubuntu 8.04 LTS (64 bit). I think you can get a discount on that if you click this link:

Happy Hacking!
Posted in Programming, Web Programming | Tagged PostgreSQL, Ruby on Rails Programming, Ubuntu, Web Programming | 11 Replies
Flash Player Bug with RoR 2:HTTPService fires fault by http status code 201 Posted on by nachbar 2

Regarding Flexible Rails: Flex 3 on Rails 2, by Peter Armstrong, and its Forum

This relates to a previous thread, but the solution is buried deep within the thread. There is a bug in Flash Player, which has been reported:

http://bugs.adobe.com/jira/browse/SDK-14153

Adobe considers this bug report closed with a resolution of "cannot fix". Basically, Flash Player's HTTPService incorrectly fires a fault on status code 201, which indicates successful creation. The Rails 2 scaffolding code returns status code 201 on successful creation, triggering the fault event from HTTPService, and preventing the code in CreateLocationCommand.as on page 318 (for example) from working.

Since Adobe has given up on fixing this error, a workaround is required. One workaround would be to intercept the fault event, locate the status code 201, and treat it as success. However, I cannot find the status code in the fault event (!). You could also just treat the fault as a success, but then you wouldn't know whether the create was successful.

The best workaround seems to be to change the status code returned from 201 to 200. This can be done in the rails controller. In this case, using iteration 8 code, pomodo/app/controllers/locations_controller.rb, line 55, change :created to :ok and CreateLocationCommand.as will work again.
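Concretely, in the scaffold-generated respond_to block of the create action, the change is roughly this (your generated code may differ slightly):

# before: format.xml { render :xml => @location, :status => :created, :location => @location }
format.xml { render :xml => @location, :status => :ok, :location => @location }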

James Nachbar
http://www.plastic.org

Posted in Web Programming | Tagged Adobe Flex Programming, Ruby on Rails Programming | 2 Replies
Flex-Rails:protect_from_forgery problem with Rails 2.1 produces ioError 2032 Posted on by nachbar 2

Update for Rails 2.2: According to the release notes, "Request forgery protection has been tightened up to apply to HTML-formatted content requests only" in Rails 2.2. I have not tested this, but it should obviate the problem addressed in this post for Rails 2.2 and newer.

Regarding Flexible Rails: Flex 3 on Rails 2, by Peter Armstrong:

The book talks about commenting out protect_from_forgery, and then uncommenting it in iteration 5 without mentioning what had changed to allow protect_from_forgery to be used.

In reviewing old vs. new rails code (particularly vendor/rails/actionpack/lib/action_controller/request_forgery_protection.rb), it appears that the older versions of rails did not run the forgery protection check for .xml requests, but the newer versions do. Thus, unless you are manually adding the appropriate parameters (see the above file for the current test being done to see if the form request is forged), you will fail the forgery test unless you prevent the test from running. More info on that here:

http://ryandaigle.com/articles/2007/9/24/what-s-new-in-edge-rails-better-cross-site-request-forging-prevention

At a minimum, you will need:
skip_before_filter :verify_authenticity_token
in your sessions_controller.rb to avoid the ioError 2032.
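
In context, a minimal sketch of sessions_controller.rb (the rest of the controller is unchanged):

class SessionsController < ApplicationController
  # Skip the CSRF check so the Flex client's .xml requests are not
  # rejected with status 422 (which surfaces in Flex as ioError 2032)
  skip_before_filter :verify_authenticity_token

  # ... existing create / destroy actions ...
end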

You can track this error down by adding a fault event handler to the HTTPService (e.g. in LoginBox.mxml on page 153). You can also look at the output from the server (the ruby script/server command), which will show status code 422 instead of 200 for the session.xml request.

For a more detailed look, go to the Rails log at log/development.log and look at the end for the most recent error. It will show that ActionController::InvalidAuthenticityToken was thrown by /vendor/rails/actionpack/lib/action_controller/request_forgery_protection.rb:86:in `verify_authenticity_token'.

CSRF attacks are not so relevant for applications running within Flash Player (as opposed to, for example, applications running within a browser), since Flash Player won't go from one site to another.

If you want to continue to use forgery protection for the .html requests, the best solution is to

1) uncomment protect_from_forgery (so the protection token is generated),

2) add skip_before_filter :verify_authenticity_token in the controllers that need to allow .xml to be served without the forgery protection, and then

3) call verify_authenticity_token (the same call used by request_forgery_protection.rb) within the .html generation code that you want to protect. verify_authenticity_token will throw the InvalidAuthenticityToken exception if the token is not correct.
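
A minimal sketch of step 3, assuming an action that serves both .html and .xml (the model and action names are placeholders):

def update
  @location = Location.find(params[:id])
  respond_to do |format|
    format.html do
      verify_authenticity_token   # raises InvalidAuthenticityToken if the token is missing or wrong
      @location.update_attributes(params[:location])
      redirect_to @location
    end
    format.xml do
      @location.update_attributes(params[:location])
      head :ok
    end
  end
end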

If you want to protect your .xml calls too, the check within verify_authenticity_token is:
form_authenticity_token == params[request_forgery_protection_token]
so you would need to get your rails app to send the form_authenticity_token to the Flex client when the session is created, and then your subsequent calls will need to set the request_forgery_protection_token param.
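
On the Rails side, that might look roughly like this in the session-creating action (a sketch; the XML structure is an assumption, not from the book):

format.xml do
  render :xml => { :authenticity_token => form_authenticity_token }.to_xml(:root => 'session'),
         :status => :ok
end

The Flex client would then hold on to that value and send it back as the authenticity_token parameter (the default name of request_forgery_protection_token) with each subsequent POST, PUT, or DELETE.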

James Nachbar
http://www.plastic.org

Posted in Web Programming | Tagged Adobe Flex Programming, Ruby on Rails Programming | 2 Replies
Flex-Rails: Non-Debug Flash Player caches, so fails to update list (status code 304) Posted on by nachbar Reply

Regarding Flex/Ruby on Rails Programming:

And then, just when everything was working in the debug Flash Player, I decided to fire up IE and run the application in Flash Player in non-debug mode, and it stopped working: after creating an item, the list blanked out rather than being updated.

Ultimately, the problem was that, in non-debug mode, using IE (but apparently not Firefox), Flash issued a conditional GET, and was getting a 304 Not Modified response instead of the updated data. In debug mode, Flash was issuing a regular GET, and thus got the correct info. Thus, the application worked in debug mode, but not in non-debug mode.

I have seen that RoR 2.1 included some new caching functionality, although I don't know if this is the kind of caching they are talking about, or why Rails was reporting not modified even after the database upon which the response was based had been modified.

That Rails was returning status code 304 could be seen in the server window (ruby script/server).

For some reason, even though I am creating a new HTTPService object for each call, the return from the POST (i.e., the one object being created) was still being returned in the result event when I sent a GET to obtain the entire list. I could determine that by sending the result event info from the list command to the debug window:

var x:XMLList = XMLList(event.result.children());
Pomodo.debug(x);

Even though this was the result of the GET call, I was still getting the result of the POST.

My fix (actually more of a workaround) was to add a time-generated string ("?" + Number(new Date())) to the end of the request URI, thus avoiding the caching problem. A better solution might be to send a no-cache header from the RoR portion, although I have not tested that. More on avoiding caching here:

http://www.ruby-forum.com/topic/76658
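
For the untested Rails-side alternative, the Rails 2 expires_now helper sets a no-cache header; a sketch for the list action (model name is a placeholder):

def index
  expires_now   # sends Cache-Control: no-cache so IE/Flash will not fall back to a stale 304 response
  @locations = Location.find(:all)
  respond_to do |format|
    format.xml { render :xml => @locations }
  end
end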

More evil IE caching, I guess!

James Nachbar
http://www.plastic.org

Posted in Programming, Web Programming | Tagged Adobe Flex Programming, Ruby on Rails Programming | Leave a reply