Friday, April 29, 2011

MySQL and FileMaker Pro?

Hi All,

I have a client that wants to use FileMaker for a few things in their office, and may have me building a web app.

The last time I used, or thought about, or even heard of, FileMaker was about 10 years ago, and I seem to remember that I don't want to use it as the back end of a sophisticated web app, so I am thinking of trying to sell them on MySQL.

However, will their FileMaker database talk to MySQL? Any idea how best to talk them down from FileMaker?

From stackoverflow
  • You may have a hard time talking them out of FileMaker, because it is actually a pretty clever tool for making small, in-house database applications, and it has a very loyal user base. But you're right: it's not a good tool for making a web application.

    I had a similar problem with a client who was still using a custom dBase IV application. Fortunately, Perl's CPAN archive has modules for talking to anything. So I wrote a script that exported the entire dBase IV database every night, and uploaded it into MySQL as a set of read-only tables.

    Unfortunately, this required taking MySQL down for 30 minutes every night. (It was a big database, and we had to convert free-form text to HTML.) So we switched to PostgreSQL, and performed the entire database update as a single transaction.
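    The nightly-rebuild idea above can be sketched in Python (the original script was Perl); the table layout and record shape here are invented for illustration, and SQLite stands in for PostgreSQL so the sketch is self-contained:

```python
import sqlite3

def nightly_import(records, db_path=":memory:"):
    """Rebuild the read-only mirror table inside one transaction,
    so readers never see a half-loaded copy."""
    conn = sqlite3.connect(db_path)
    with conn:  # single transaction: commit on success, roll back on error
        conn.execute("DROP TABLE IF EXISTS mirror")
        conn.execute("CREATE TABLE mirror (id INTEGER PRIMARY KEY, body TEXT)")
        conn.executemany(
            "INSERT INTO mirror (id, body) VALUES (?, ?)",
            [(r["id"], r["text"]) for r in records],
        )
    return conn

# `records` stands in for whatever the legacy (dBase/FileMaker) export produced.
conn = nightly_import([{"id": 1, "text": "hello"}, {"id": 2, "text": "world"}])
```

    Because the whole rebuild happens in one transaction, readers keep seeing the old copy until the commit, and the database never needs to be taken down during the import.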

    But what if you need read-write access to the FileMaker database? In that case, you've got several choices, most of them bad:

    1. Build a bi-directional synchronization tool.
    2. Get rid of FileMaker entirely. If the client's FileMaker databases are trivial, this may be relatively easy. I'd begin by writing a quick-and-dirty clone of their most important databases and demoing it to them in a web browser.
    3. The client may actually be best served by a FileMaker-based web application. If so, refer them to Google.

    But how do you sell the client on a given choice? It's probably best to lay out the costs and benefits of each choice, and let the client decide which is best for their business. You might lose the job, but you'll maintain a reputation for honest advice, and you won't get involved in a project that's badly suited to your client.

  • I've been tackling similar problems and found a couple of solutions that emk hasn't mentioned...

    1. FileMaker can link to external SQL data sources (ESS), so you can use ODBC to connect to a MySQL (or other) database and share data. You can find more information here. We tried it and found it to be pretty slow, to be honest.
    2. SyncDek is a product that claims to allow you to perform data replication and transmission between FileMaker, MySQL and other structured sources.
    3. It is possible to use FileMaker's Instant Web Publishing as a web service that your app can then push and pull data through. We found a couple of wrappers for this in Python and PHP.
    4. You can put a trigger in the FileMaker database so that every time a record (or the part of a record you are interested in) is changed, you can call a web service that updates a MySQL or memcached version of that data that your website can access.
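    A minimal sketch of option 4's receiving end, in Python; the record shape and key scheme are invented, and a plain dict stands in for memcached or a MySQL mirror table:

```python
# A dict stands in for the fast store (memcached / a MySQL mirror) the website reads.
cache = {}

def on_record_changed(record):
    """What the web service would do when FileMaker's trigger fires:
    push the changed record into the store the website queries."""
    key = "record:%s" % record["id"]
    cache[key] = record
    return key

on_record_changed({"id": 42, "name": "Acme Corp"})
```

    The website then reads from the cache instead of querying FileMaker directly, which sidesteps FileMaker's performance problems under concurrent access.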

    I found that people like FileMaker because it gives them a very visual interface onto their data - it's very easy to make quite large self-contained applications without much development knowledge. But when it comes to collaborating with many users, or presenting this data in a format other than the FileMaker application, we found performance to be a real problem.

  • We develop solutions with both FileMaker and PHP/MySQL. Our recommendation is to do the web app in a web app optimised technology like MySQL.

    Having said that, FileMaker does have a solid PHP API so if the web app has relatively lightweight demands (e.g. in house use) then use that and save yourself the trouble of synchronisation.

    FileMaker's ESS technology lets FileMaker use an SQL db as the backend data source, which gives you two options:

    1. Use ESS as a nice tight way to synchronise right within FileMaker - that way you'd have a "native" data source to work with within the FileMaker solution per se.

    2. Use ESS to allow FileMaker to be used as a reporting/data mining/casual query and edit tool directly on the MySQL tables - it works sweet.

    We've found building a sophisticated application in FileMaker with ESS/MySQL backend to be very tricky, so whether you select 1 or 2 from above depends on how sophisticated and heavy duty that FileMaker usage is.

    Otherwise, SyncDek has a good reputation as a third-party solution for automating synchronisation.

Detecting when Iframe content has loaded (Cross browser)

I'm trying to detect when an iframe and its content have loaded, but I'm not having much luck. My application takes some input in text fields in the parent window and updates the iframe to provide a 'live preview'.

I started with the following code (YUI) to detect when the iframe load event occurs.

$E.on('preview-pane', 'load', function() {
    previewBody = $('preview-pane').contentWindow.document.getElementsByTagName('body')[0];
});

'preview-pane' is the ID of my iframe and I'm using YUI to attach the event handler. However, trying to access the body in my callback (upon iframe load) fails, I think because the iframe loads before the event handler is ready. This code works if I delay the iframe loading by making the PHP script that generates it sleep.

Basically, I'm asking what is the correct approach across browsers to detect when the iframe has loaded and its document is ready?

From stackoverflow
  • See this blog post. It uses jQuery, but it should help you even if you are not using it.

    David Caunt : Interesting, but the problem I have is with the load events and timing. I am listening for the load event as advised by that article.
  • to detect when the iframe has loaded and its document is ready?

    It's ideal if you can get the iframe to tell you itself from a script inside the frame. For example it could call a parent function directly to tell it it's ready. Care is always required with cross-frame code execution as things can happen in an order you don't expect. Another alternative is to set ‘var isready= true;’ in its own scope, and have the parent script sniff for ‘contentWindow.isready’ (and add the onload handler if not).

    If for some reason it's not practical to have the iframe document co-operate, you've got the traditional load-race problem, namely that even if the elements are right next to each other:

    <img id="x" ... />
    <script type="text/javascript">
        document.getElementById('x').onload= function() {
            ...
        };
    </script>
    

    there is no guarantee that the item won't already have loaded by the time the script executes.

    The ways out of load-races are:

    1. on IE, you can use the ‘readyState’ property to see if something's already loaded;

    2. if having the item available only with JavaScript enabled is acceptable, you can create it dynamically, setting the ‘onload’ event function before setting source and appending to the page. In this case it cannot be loaded before the callback is set;

    3. the old-school way of including it in the markup:

      <img onload="callback(this)" ... />

    Inline ‘onsomething’ handlers in HTML are almost always the wrong thing and to be avoided, but in this case sometimes it's the least bad option.

    David Caunt : Thanks. My solution checks the readyState (if it exists), then the body element's innerHTML length to see if it has loaded. If not, it attaches the load event handler. Seems to work OK.

What's a good brief introduction to Mono/.NET programming?

Hello, I am quite impressed by Mono's features, especially the portability of the library. I think I'll give it a try sooner or later, and I'd need some directions to find an introduction to Mono/.NET programming. Can you help me?

Thank you

From stackoverflow
  • For the most part, Mono programming is not any different from classical .NET programming.

    You can only use C# in a decent way, none of the other .NET languages. Even though Mono claims VB.NET support, it is far from having a reasonable compiler or reasonable IL support; VB.NET is mostly experimental.

    If your main OS is Windows, the easiest way to start is to download the Mono VMware image - http://www.go-mono.com/mono-downloads/download.html - openSUSE.

    The environment comes ready to kick-start Mono development.

    Try the Mono Start page for tips.

    Don't forget you can't use COM.

    The best way to proceed is to code it in Mono (the IDE is terrible after VS.NET) and then port it to Windows.

    jpobst : Although you cannot use COM, you can use the winforms WebBrowser Control, which is implemented using Mozilla's Gecko. VB.Net is unfortunately still at the VB.Net 8 version, not the VB.Net 9 version.
    dr. evil : @jpobst I didn't know about the WebBrowser Control implementation, that's kind of cool :) The VB.NET 8 version is not the problem; the problem is that the compiler is seriously bad and support for VB DLLs is a bit flaky. You can't code against a compiler that gives an "Unknown Error" without a line number!
    jpobst : Ah, that would suck. I did not know the VB.Net compiler was that bad. On the plus side, you can compile with MS's VB compiler and use that on Mono. However, that will not get you around missing stuff in the VB support dll.
    dr. evil : I really love the idea of Mono, it's such a hard job and quite impressive. Don't want to disrespect the developers. So I keep my hopes high :) Hopefully they'll fix those show stoppers soon.
  • I found Petzold's free ebook .NET Book Zero useful as an introduction to .Net programming.

  • Great recommendations by Slough. I've found the Mono Getting Started Guide and the Wikipedia page very helpful in understanding its history, how it works, and how to get started.

Overload "base" constructor or "this" constructor?

I have a few types that derive from the simplified Base shown below.

I am not sure whether to use base class's constructor or this constructor when overloading constructors.

ConcreteA overloads constructors purely using base constructors, while
ConcreteB overloads using this for the first two overloads.

What would be a better way of overloading constructors?

public abstract class Base
{
    public string Name { get; set; }
    public int? Age { get; set; }

    protected Base() : this(string.Empty) {}
    protected Base(string name) : this(name, null) {}
    protected Base(string name, int? age)
    {
        Name = name;
        Age = age;
    }
}

public class ConcreteA : Base
{
    public ConcreteA(){}
    public ConcreteA(string name) : base(name) {}
    public ConcreteA(string name, int? age) : base(name, age)
    {
    }
}

public class ConcreteB : Base
{
    public ConcreteB() : this(string.Empty, null){}
    public ConcreteB(string name): this(name, null){}
    public ConcreteB(string name, int? age) : base(name, age)
    {
    }
}

[Edit] It looks like what Ian Quigley suggested in his answer makes sense. If I were to have a call that initializes validators, ConcreteA(string) would never initialize validators in the following case.

public class ConcreteA : Base
{
    public ConcreteA(){}
    public ConcreteA(string name) : base(name) {}
    public ConcreteA(string name, int? age) : base(name, age)
    {
        InitializeValidators();
    }
    private void InitializeValidators() {}
}

From stackoverflow
  • In your case, from what you have provided, it doesn't matter. You really only want to use this when you have a constructor in your current class that is not part of your base class, or when there is some code in the current class constructor that you want to execute that isn't contained in the base class.

  • This. Because if you ever place code in ConcreteB(string, int?), you want the string-only constructor to call it.

    Sung Meister : This seems to make sense if I were to have other initializations going on in concrete constructors.
    Dead account : Yes, and "this" will always call "base" at the end of the day. So even if "this" does nothing, it'll drop down to "base"
  • In general, I'd call "this" rather than "base". You'll probably reuse more code that way, if you expand your classes later on.

  • In order to reduce the complexity of the code paths, I usually try to have exactly one base() constructor call (the ConcreteB case). This way you know that the initialization of the base class always happens in the same fashion.

    However, depending on the class you override, this may not be possible or add unneeded complexity. This holds true for special constructor patterns such as the one when implementing ISerializable.

  • It is fine to mix and match; ultimately, when you use a this(...) constructor, it will eventually get to a ctor that calls base(...) first. It makes sense to re-use logic where required.

    You could arrange it so that all the constructors called a common (maybe private) this(...) constructor that is the only one that calls down to the base(...) - but that depends on whether a: it is useful to do so, and b: whether there is a single base(...) ctor that would let you.

  • Ask yourself again why you are overloading the constructor in the Base class. This one is enough:

    protected Base()
    

    The same goes for the subclass, unless you need either field to have a particular value when you instantiate, which in your example is not the case since you already have the default constructor.

    Also remember that any constructor should put the instance of the object in a correct state.

Implementing variable constraints in C++

I've been looking for an example that shows how to implement constraints in C++ (or a boost library that lets me do this easily), but without much luck. The best I could come up with off the top of my head is:

#include <cassert>

#include <boost/function.hpp>
#include <boost/lambda/lambda.hpp>

template<typename T>
class constrained
{
    public:
        // value defaults to defaultValue when not supplied (a default
        // argument cannot refer to another parameter, hence two overloads)
        constrained(boost::function<bool (T)> constraint, T defaultValue)
            : value(defaultValue), defaultValue(defaultValue), constraint(constraint)
        {
            assert(this->constraint(this->defaultValue));
        }

        constrained(boost::function<bool (T)> constraint, T defaultValue, T value)
            : value(value), defaultValue(defaultValue), constraint(constraint)
        {
            assert(this->constraint(this->defaultValue));
            assert(this->constraint(this->value));
        }

        void operator=(const T &assignedValue)
        {
            if (constraint(assignedValue))
                value = assignedValue;
        }

    private:
        T value;
        T defaultValue;
        boost::function<bool (T)> constraint;
};

int main(int argc, char* argv[])
{
    constrained<int> foo(boost::lambda::_1 > 0 && boost::lambda::_1 < 100, 5, 10);

    foo = 20;  // works
    foo = -20; // fails

    return 0;
}

Of course there's probably some more functionality you'd want from a constraint class. This is just an idea for a starting point.

Anyway, the problem I see is that I have to overload all operators that T defines in order to make it really behave like a T, and there is no way for me to find out what those are. Now, I don't actually need constraints for that many different types, so I could just leave out the template and hard code them. Still, I'm wondering if there's a general (or at least more succint/elegant) solution or if there's anything seriously wrong with my approach.

From stackoverflow
  • I might just be confused, but if you are facing parameters that must not violate specific constraints, wouldn't it be easiest to create a class for them, checking for constraints in constructors and assignment operators?

    drby : I could do that of course, but I have lots of different parameters with lots of different constraints, so the result would be a lot of brittle boilerplate code. In the above solution, all the logic is in a single variable declaration. I'd say that's better.
  • Looks good as a tiny example. But be sure to implement all the operators and handle wrong values somehow.

    foo = 100; // works
    ++foo; // should throw an exception or perform an assert
    

    Use Boost.Operators to help you with operator overloading.

    And it would probably be good to have an option as a template parameter: either exception or assertion.

    I'd use such a class. It is always better to have an index parameter that automatically checks the vector range and does an assertion.

    void foo( VectorIndex i );
    
    DevSolar : "It is always better to have an index parameter that auto check vector range and do assertion." - how about at()?
    Mykola Golubyev : @DevSolar: a) at() throws an exception; b) such VectorIndex can be implemented for any legacy Array in the project.
  • I agree with Mykola Golubyev that Boost.Operators would help.

    You should define all the operators that you require for all the types you are using.

    If any of the types you are using don't support the operator (for example the operator++()), then code that calls this method will not compile but all other usages will.

    If you want to use different implementations for different types then use template specialisation.

  • You don't need to overload all operators as others have suggested, though this is the approach that offers maximum control because expressions involving objects of type constrained<T> will remain of this type.

    The alternative is to only overload the mutating operators (=, +=, -=, *=, /=, %=, &=, |=, ^=, <<=, >>=, pre and post ++, pre and post --) and provide a user-defined conversion to T:

    template<typename T>
    class constrained {
        ... // As before, plus overloads for all mutating operators
    public:
        operator T() const {
            return value;
        }
    };
    

    This way, any expression involving a constrained<T> object (e.g. x + y where x is int and y is constrained<int>) will be an rvalue of type T, which is usually more convenient and efficient. No safety is lost, because you don't need to control the value of any expression involving a constrained<T> object -- you only need to check the constraints at a time when a T becomes a constrained<T>, namely in constrained<T>'s constructor and in any of the mutating operators.

  • Boost.Constrained_Value may be of interest to you. It was reviewed last December, but it is not in the latest Boost release. IIRC, the review was mostly positive, but the decision is still pending.

  • Boost actually had such a library under discussion (I don't know what became of it). I've also written my own version of such a type, with slightly different behaviour (less flexible, but simpler). I've blogged an admittedly somewhat biased comparison here: Constrained vs. restricted value types

    Edit: apparently Eric knows better what happened to boost's implementation.

How do I get the server endpoint in a running flex application?

I need a way of getting the active server address, port, and context during runtime from my Flex application. Since we are using Ant for our build process, the server connection information is dynamically specified in our build properties file, and the {server.name}, {server.port} and {context.root} placeholders are used in the services-config.xml file instead of the actual values.

We have some other Java servlets running on the same machine as our blazeDS server, and I'd like some way to programmatically determine the server endpoint information so I don't need to hardcode the servlet URL's into an XML file (which is what we are presently doing).

I have found that I can at least get the context root by adding the following to our main application MXML file:

<mx:Application ... >
  <mx:HTTPService id="contextRoot" rootURL="@ContextRoot()"/>
</mx:Application>

However, I still need some way of fetching the server address and port, and if I specify the entire address by giving -context-root=http://myserver.com:8080/mycontext, then the flex application attempts to connect to http://localhost/http://myserver.com:8080/mycontext/messagebroker/amf, which is of course totally wrong. What is the proper way to specify the context root and server URL, and how can I retrieve them from our application?

From stackoverflow
  • Why not call a javascript function in the wrapper via ExternalInterface to return the value of location.hostname?

    <mx:Script>
        <![CDATA[
            import flash.external.ExternalInterface;

            private var hostname:String;

            private function getHostName():void
            {
                hostname = ExternalInterface.call("getHostName");
            }
        ]]>
    </mx:Script>
    

    javascript in wrapper:

    <script type="text/javascript">
        function getHostName()
        {
            return location.hostname;
        }
    </script>
    
    Nik Reiman : That's not what I'm asking. Plus, you can get this just as easily through Application.application.url and parsing the string.
  • You can use the BrowserManager to get the information about the url.

    var bm:IBrowserManager = BrowserManager.getInstance();
    bm.init(Application.application.url);
    var url:String = bm.base;
    

    see also http://livedocs.adobe.com/flex/3/html/deep_linking_7.html#251252

  • We use an Application subclass that offers the following methods :

     /**
      * The URI of the AMF channel endpoint. <br/>
      * Default to #rootURI + #channelEndPointContext + #this.channelEndPointPathInfo
      */
     public function get channelEndPointURI() : String
     {
        return this.rootServerURI + ( this.channelEndPointContext ? this.channelEndPointContext : "" ) + this.channelEndPointPathInfo
     }
    
     /**
      * The root URI (that is scheme + hierarchical part) of the server the application
      * will connect to. <br/>
      * If the application is executing locally, this is the #localServerRootURI. <br/>
      * Else it is determined from the application #url. <br/>
      */ 
     public function get rootServerURI() : String
     {
          var result : String = ""
          if ( this.url && ( this.url.indexOf("file:/") == -1 ) )
          {
               var uri : URI = new URI( this.url )
               result = uri.scheme + "://" + uri.authority + ":" + uri.port
          }
          else
          {
               result = this.localServerRootURI
          }
    
          return result 
     }
    

    This generic application supports the channelEndPointContext, channelEndPointPathInfo and localServerRootURI properties (typically "mycontext" and "/messagebroker/amf/" in your example; the local server root is used when the application is executed via Flex Builder, in which case it has a file:// URL).
    The determination of the complete endpoint URI is then performed using either the localServerRootURI property or the application url, as our services are exposed by the very same server that serves the application's SWF (which, as far as I understand, is your case too).

    So, in your example, one would write :

     <SuperApplication ...> <!-- SuperApplication is the enhanced Application subclass -->
        <mx:HTTPService id="myHTTPService" url="{this.channelEndPointURI}"/>
     </SuperApplication>
    

    Starting from here, one can also automatically determine the channelEndPointContext from the application URL instead of hardcoding it as shown in this example.

  • I've used FlashVars to pass urls in before with success. In your template html:

    var rootURL = location.href.substring(0,location.href.indexOf("flexBin"));    
    ...
    
    AC_FL_RunContent(
        "src", "${swf}",
        "FlashVars", "rootURL="+rootURL,
        "width", "${width}",
    ...
    

    And then in flex:

    service.rootURL = Application.application.parameters.rootURL;
    

    The nice thing is you can really pass in whatever you like from the server this way.

Ensure two char arrays are not the same

I am randomly generating a grid of characters and storing it in a char[,] array ...

I need a way to ensure that I haven't already generated a grid before serializing it to a database in binary format... what is the best way to compare two grids based on bytes? The last thing I want to do is loop through their contents, as I am already pulling one of them from the db in byte form.

I was thinking of a checksum, but I'm not sure whether that would work.

char[,] grid = new char[8,8];
char[,] secondgrid = new char[8,8];//gets its data from db

From stackoverflow
  • From what I can see, you are going to have to loop over the contents (or at least a portion of them); there is no other way of talking about an array's contents.

    Well, as a fast "definitely not the same" you could compute a hash over the array - i.e. something like:

        int hash = 7;
        foreach (char c in data) {
            hash = (hash * 17) + c.GetHashCode();
        }
    

    This has the risk of some false positives (reporting a dup when it is unique), but is otherwise quite cheap. Any use? You could store the hash alongside the data in the database to allow fast checks - but if you do that you should pick your own hash algorithm for char (since it isn't guaranteed to stay the same) - perhaps just convert to an int, for example - or to re-use the existing implementation:

        int hash = 7;
        foreach (char c in data) {
            hash = (hash * 17) + (c | (c << 0x10));
        }
    


    As an aside - for 8x8, you could always just think in terms of a 64 character string, and just check ==. This would work equally well at the database and application.
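    The 64-character-string idea is language-neutral; here is a Python sketch (grid contents invented for illustration) of the cheap "definitely not the same" hash check followed by an exact comparison:

```python
import hashlib

def grid_key(grid):
    """Flatten an 8x8 grid (here, a list of 8 row strings) into one
    64-character string; equal grids yield equal keys."""
    flat = "".join(grid)
    assert len(flat) == 64
    return flat

def grid_hash(grid):
    """Cheap pre-check to store alongside the grid in the database:
    different hashes mean definitely different grids; equal hashes
    still need a full comparison to rule out a collision."""
    return hashlib.md5(grid_key(grid).encode("utf-8")).hexdigest()

a = ["ABCDEFGH"] * 8
b = ["ABCDEFGH"] * 7 + ["ABCDEFGZ"]
is_duplicate = grid_hash(a) == grid_hash(b) and grid_key(a) == grid_key(b)
```

    Storing the hash in an indexed column means most candidate grids are rejected without ever pulling the stored bytes back out of the database.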

  • I'd go with a checksum/hash mechanism to catch a large percentage of the matches, then do a full comparison if you get a match.

    What is the range of characters used to fill in your grid? If you're using just letters (not mixed case, or case not important) and an 8x8 grid, you're only talking about 7 or so possible collisions per item within your problem space (a very rare occurrence), assuming a good hashing function. You could do something like:

    1. Generate Grid
    2. Load any matching grids from DB
    3. if found match from #2, goto 1
    4. Use your new grid.
  • Can't you get the database to do it? Make the grid column UNIQUE. Then, if you need to detect that you've generated a duplicate grid, the method for doing this might involve checking the number of rows affected by your operation, or perhaps testing for errors.

    Also, if each byte is simply picked at random from [0, 255], then performing a hash to get a 4-byte number is no better than taking the first four bytes out of the grid. The chance of collisions is the same.

    Marc Gravell : Since this is a char[] (not byte[]), you'd only have room for 2 characters... using a hash algorithm will make better use of the used/unused code-point ranges, and will (in typical use) give a better collision rate than just taking the first two chars.
    Artelius : Well, my unknowledge of C# shines through. But my first point still stands.
  • Try this (invoke ComputeHash for every matrix and compare the guids):

    private static MD5 md5 = MD5.Create();
    public static Guid ComputeHash(object value)
    {
        Guid g = Guid.Empty;
        BinaryFormatter bf = new BinaryFormatter();
        using (MemoryStream stm = new MemoryStream())
        {
            bf.Serialize(stm, value);
            g = new Guid(md5.ComputeHash(stm.ToArray()));
            stm.Close();
        }
        return g;
    }
    

    note: Generating the byte array could be accomplished much more simply, since you have a char array.

In the JDK 1.6 compiler, what does "-source 1.6" enable, if anything?

I'm aware of the class file format change using -target 1.6 (which is required when you use -source 1.6). But does the source option specifically change anything or enable any new features?

From stackoverflow
  • From the javac documentation:

    1.6 - This is the default value. No language changes were introduced in Java SE 6. However, encoding errors in source files are now reported as errors, instead of warnings, as previously.

    paxdiablo : First three answers identical (including my now deleted one) - I guess the fastest gun wins :-) +1.
    Mark Renouf : I found one additional difference which is not well documented: In JDK 1.6, the @Override annotation became valid to apply to methods which implement interfaces (which do not override a superclass method).
  • From Sun's javac documentation:

    No language changes were introduced in Java SE 6. However, encoding errors in source files are now reported as errors, instead of warnings, as previously.

Migrating client app to FB 2.1

I use Delphi 7 with DBExpress. I want to fully migrate my app to Firebird 2.1. I already know what to do on the server side, but I'm not really sure about the client side.

In the TSQLConnection component I see that vendorLib property points to GDS32.dll. The driverName is Interbase and getDriverFunc is getSQLDriverINTERBASE.

I don't know what to do in order to my connection use fbclient.dll. I tried simply changing gds32.dll to fbclient.dll in vendorLib, but it caused some access violations in my app.

Any tips?

From stackoverflow
  • Use ZeosDB components for accessing FireBird DB.

  • The Interbase DBX driver doesn't support Firebird 2.1 (you'll have problems with certain field types - BLOBs, for example). There are rumors that D2010 (which must enter beta soon) will support Firebird.

    So, you can wait for Weaver or buy a commercial Firebird DBX driver (see Upscene's site or DevArt/CoreLab's ones).

    Also, it is perhaps better, when you install Firebird, to check the 'Generate gds32.dll for compatibility with older applications' option - or something similar; I don't have the FB installation kit in front of my eyes right now.

    HTH.

  • Maybe you should rename fbclient.dll to gds32.dll and replace it. Read this - it's not exactly your case, but maybe you should try the same way.

  • There are also some DBX drivers for Firebird in UIB, in \Misc\dbexpress.

    To complete the picture, I also know of DBX drivers for Delphi 2007/2009 at IBExpert:

    http://ibexpert.net/ibe/index.php?n=Main.News#news16

    and other connectivity options for accessing Firebird with Delphi: http://www.firebirdfaq.org/faq7/

How can you create one file from many png files in terminal?

Example of data

06-7-1-B.png
06-7-2-B.png
07-3-1-B.png
07-3-2-B.png
06-6-1-P.png
06-6-2-P.png
07-12-1-P.png
07-12-2-P.png
07-6-1-P.png
07-6-2-P.png
07-7-1-P.png
07-7-2-P.png
08-2-1-P.png
08-2-2-P.png
08-2-3-P.png
08-7-1-P.png
08-7-2-P.png
08-7-3-P.png
08-7-4-P.png
08-9-1-P.png
08-9-2-P.png

I want to join the files together as follows. For instance, the files 06-7-1-B.png and 06-7-2-B.png should be joined together and renamed as 06-7-B.png.

How can you create one file, either PNG or PDF, from many PNG files in the terminal?

From stackoverflow
  • Can you elaborate on how you want them "joined together"?

    Do you just want to append the bits from one file to the next file? Or do you want to create an animated PNG?

  • You need to be clearer about what you mean by "joining" two images; it is not obvious. You could mean anything from creating a montage, i.e. "gluing" the images together side by side, to some kind of "composition" where the images are stacked on top of each other, perhaps using alpha-channel transparency to let lower layers shine through.

    For many of these, you should look into ImageMagick as a way of doing it from the command line.

    Masi : @unwind: I mean to put one image at the bottom of another image, such that the image with the smallest sheet number for a given exercise is at the top.
  • You can do this with the Netpbm tools. Netpbm is a collection of small quick tools meant to be used from the command line (or scripts). There are many tools, each with its own purpose, and they can be chained together with pipes to do complex operations. See the documentation.

    There's also ImageMagick, but in most cases Netpbm will run more quickly and use less memory. If you have smallish images this won't be a concern.

  • Try ImageMagick (as suggested by unwind):

    montage +frame +shadow +label -geometry +0+0 -tile 2x1 06-7-1-B.png 06-7-2-B.png 06-7-B.png

how does one get a count of rows in a datastore model in google appengine?

I need to get a count of records for a particular Model on app engine. How does one do it?

I bulk uploaded more than 4000 records, but modelname.count() only shows me 1000.

From stackoverflow
  • As of release 1.3.6, there is no longer a cap of 1,000 on count queries. Thus you can do the following to get a count beyond 1,000:

    count = modelname.all(keys_only=True).count()
    

    This will count all of your entities, which could be rather slow if you have a large number of entities. As a result, you should consider calling count() with some limit specified:

    count = modelname.all(keys_only=True).count(some_upper_bound_suitable_for_you)
    
    a paid nerd : Of course, you only have 300ms to do this :(
    Nick Johnson : 300ms? Where did you get that figure? Regardless, though, this demonstrates why counting objects on the fly is not a good idea.
    dar : My guess is he meant 30,000ms. But that really isn't the case because if you're doing this because of the bulk uploader, you probably just run the count over the remote_api anyway - which AFAIK is not subject to the 30 second timeout.
    Shay Erlichmen : I added keys_only=True which is faster
  • In GAE, a count will always make you page through the results when you have more than 1,000 objects. The easiest way to deal with this is to add a counter property to your model (or a separate counters table) and update it every time you create a new object.
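Counter properties aside, if a one-off count is all that's needed (e.g. after a bulk upload), the pre-1.3.6 pattern was to page through a keys-only query with a cursor, summing as you go. A datastore-agnostic sketch of that pattern, with a stand-in fetch function since the real query API isn't shown here:

```python
def count_all(fetch_page, batch_size=1000):
    """Count entities by paging through keys in batches.

    fetch_page(cursor, limit) must return (keys, next_cursor), with
    next_cursor set to None once the results are exhausted.
    """
    total, cursor = 0, None
    while True:
        keys, cursor = fetch_page(cursor, batch_size)
        total += len(keys)
        if cursor is None:
            return total

def make_fake_datastore(n):
    """Stand-in for a real keys-only datastore query, for demonstration."""
    all_keys = list(range(n))
    def fetch_page(cursor, limit):
        start = cursor or 0
        page = all_keys[start:start + limit]
        next_cursor = start + limit if start + limit < n else None
        return page, next_cursor
    return fetch_page
```

On App Engine itself, fetch_page would wrap a keys-only query using its cursor support (roughly query.with_cursor(cursor) / query.cursor() in the db API).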

Column.DbType affecting runtime behavior

Hi

According to the MSDN docs, the DbType property/attribute of a Column type/element is only used for database creation.

Yet, today, when trying to submit data to an IMAGE column on a SQL CE database (not sure if only on CE), I got an exception of 'Data truncated to 8000 bytes'. This was due to the DbType still being defined as VARBINARY(MAX), which SQL CE does not support. Changing the DbType to IMAGE fixes the issue.

So what other surprises do the Linq2SQL attributes hold in store? Is this a bug or intended? Should I report it to MS?

UPDATE

After getting the answer from Guffa, I tested it, but it seems that for NVARCHAR(10), adding an 11-character string causes a SQL exception, not a Linq2SQL one.

The data was truncated while converting from one data type to another. 
     [ Name of function(if known) =  ]
A first chance exception of type 'System.Data.SqlServerCe.SqlCeException' 
     occurred in System.Data.SqlServerCe.dll
From stackoverflow
  • It certainly sounds like the MSDN article could be misleading... however, LINQ-to-SQL, while still alive, isn't getting vast amounts of dev time, so I wouldn't hold my breath waiting for an update.

    You could post on connect, or perhaps add a remark to the MSDN page (Community Content).

    leppie : Thanks Marc :) I will continue to use and abuse Linq2SQL as long as I do not have to write SQL.
  • The DbType is only required if you are going to create a table, but that doesn't mean that it's ignored the rest of the time.

    If you for example define a VarChar column with the size 100, you will get an exception if you send a string that is longer than 100 characters, even if the field in the database actually could accommodate the string.

    The documentation says that you shouldn't specify the DbType if it's not needed, as the data type is inferred from the value that you use. However, there might be some situations where you don't want it to use the DbType that is inferred.

    leppie : Thanks, I didnt know that, but I swear I have never seen that behavior with VARCHAR. Will go check again :) Perhaps it was a L2S exception and not a SQL one.
    leppie : That does not work :( Just tried it, getting SQL exception.
    Guffa : Yes, the exception comes from ADO.NET, not from LINQ to SQL, as it's ADO.NET that handles the type checking.
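For reference, the workaround the question describes is just an override of the inferred type in the column mapping; roughly (the property and storage names here are illustrative, not from the question):

```csharp
// On SQL Server CE, map the column as IMAGE rather than the inferred
// VARBINARY(MAX), which SQL CE does not support:
[Column(Storage = "_Photo", DbType = "IMAGE")]
public System.Data.Linq.Binary Photo
{
    get { return this._Photo; }
    set { this._Photo = value; }
}
```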

Why are all files in AnkhSVN solution marked as new after installing TortoiseSVN?

After installing TortoiseSVN 1.6.0, all files loaded into an AnkhSVN enabled Visual Studio 2008 project are marked as new (blue +).

I have tried re-installing AnkhSVN 2.0.6347 and checking out the repository into a new "clean" folder. However, neither seems to resolve the problem.

My question is kind of twofold: is there a resolution to this issue, and if there is an incompatibility between the two products (i.e. different SVN bindings?), is there a way to tell which combinations of AnkhSVN and TortoiseSVN are going to play nice with each other?

From stackoverflow
  • I noticed this too a couple of days ago. This happened because Tortoise converted your working copy to 1.6 version and Ankh doesn't know how to read it.

    The solution is simple: I installed the latest daily build of Ankh (http://ankhsvn.open.collab.net/daily/) and now everything works like a charm.

    Richard Slater : Don't know why I didn't think of this, thanks for the link and making my life that bit easier.
  • I can verify that this works, too.

    One thing that I ran into was that I uninstalled the older version before installing the new daily build version (I installed AnkhSvn-Daily-2.1.6649.29.msi), but once I installed the new version I didn't have any source code control integration whatsoever in Visual Studio 2008!

    As it turned out, when you uninstall AnkhSVN your source code control provider gets set to "None", and you have to go to "Tools | Options | Source Control" and set it back to AnkhSVN.

    Once you do that, you're good to go with Visual Studio integration and TortoiseSVN 1.6.

  • Thanks, it worked perfectly.

  • Sadly this solution does not work for anyone using VS2003 since there is no daily build (SVN 1.6) for the 1.x branch of AnkhSVN.

    The AnkhSVN team has stated that they are too busy right now to update AnkhSVN 1.x (ie the VS2002/VS2003 version), so it looks like unless someone is willing to help them with it (it’s open-source), there may not be an SVN1.6 version for VS2003 for a while.

How do you pass a reference when using a typename as a function argument in C++?

Hi,

I have some weird problem with templates. When trying to pass a parameterised iterator, it complains that no function can be found. The code snippet is below; forget about the functionality, it's the reference to the templatized iterator that interests me:

#include <list>
#include <iostream>

template<typename T>
void print_list_element(typename std::list<T>::iterator& it){
    std::cout << *it << std::endl;
}

int main() {
    std::list<int> theList;
    theList.push_back(1);

    std::list<int>::iterator it = theList.begin();
    print_list_element(it);

    return 0;
}

If you try to compile this with g++ v4.3.2 it complains with a message saying that:

main.cpp:14: error: no matching function for call to 'print_list_element(std::_List_iterator<int>&)'

Is there something wrong with the code I wrote, or does g++ need more information?

From stackoverflow
  • g++ can't figure out which template overload of print_list_element it should use. If you explicitly specify the template parameter it works:

    print_list_element<int>(it);
    
  • That is illegal because std::list< T >::iterator is not what the standard calls a proper deduced context

    Richard Corden : I think the appropriate part of the '03 standard is: 14.8.2.4/4. Also the standard says that this is a "non-deduced context", rather than your wording above.
  • The other responses are correct, but for completeness I'll just add that, by design, C++ can only deduce template arguments automatically in certain cases, and this isn't one of them.

    When you think about it, you'll realise that automatic deduction in this case would lead to an undesirable situation. std::list<T>::iterator is not a real type, it's just a typedef alias for a real type (e.g. it might be T*) to which it is immediately translated, so the compiler would have to build some sort of "reverse index" in order to map T* back to std::list<T>::iterator for automatic deduction of T to work here. But this mapping would break as soon as another class template was created that had a type member called iterator that was typedefed to T* -- then the compiler would have two choices of what to translate T* to, and no way to choose between them. Clearly, any automatic deduction policy that breaks when an unrelated class adds a particular typedef type member is much too fragile to work.
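To make that fragility concrete, here is a small sketch (the class names A and B are invented) where two unrelated templates share the same iterator type, so the compiler has no way to recover T from the argument:

```cpp
#include <cassert>

// Two unrelated class templates whose nested iterator typedef resolves to
// the same real type.
template <typename T> struct A { typedef T* iterator; };
template <typename T> struct B { typedef T* iterator; };

// typename A<T>::iterator is a non-deduced context: handed an int*, the
// compiler cannot tell whether it "came from" A<int>, B<int>, or neither.
template <typename T>
T deref(typename A<T>::iterator it) { return *it; }

int use() {
    int x = 5;
    A<int>::iterator it = &x;   // really just int*
    // deref(it);               // error: no matching function (T not deducible)
    return deref<int>(it);      // fine once T is supplied explicitly
}
```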

  • A better way is to write the function like this:

    template<typename Iter>
    void print_element(Iter it){
        std::cout << *it << std::endl;
    }
    

    This will now work for any type of iterator, not just std::list<T>::iterator. Also, the template type will be deduced correctly from the argument.

    I realize that it was a contrived example, but you almost never want to pass list<T>::iterator to a function anyway. At worst, at least template on the list type so that your code would work with lists with custom allocators.

    j_random_hacker : +1. This will even work for plain arrays. If for some reason you really want it to work *only* for std::list, look up Boost's enable_if<> template.

Avoid Re-initialization of Critical Section

Exact Duplicate

Initialize Critical Section only once for a process

I have a dll that creates a global critical section, initializes and use it.

Now a third party application is using / loading the dll more than once which leads to a heap corruption.

The appverifier warns me with a

--> VERIFIER STOP 00000211: pid 0x1470: Critical section is already initialized. <--

Using a global flag to check if the critical section object is already initialized doesn't help. Any ideas on accomplishing the same?

Thanks

From stackoverflow
  • Have you looked at this question for a possible answer?

    http://stackoverflow.com/questions/724560/initialize-critical-section-only-once-for-a-process

  • Now a third party application is using / loading the dll more than once

    Windows does not reload the same DLL multiple times, where "same" means same path. If the 3rd party app is loading it from different locations, that is the problem.

    Greg Domjan : Windows can reload the same DLL multiple times. DLLs are reference counted. You also have calls for process and thread start/stop. If the loading app frees the library and loads it again, this is entirely possible. It indicates no cleanup code on the critical section when the DLL is freed.
    Richard : The DLL will not be reloaded with another LoadLibrary call, the reference will be incremented, but DllMain will *not* be called.
  • Maybe using the "setAtom" and "getAtom" APIs would help? I know they are a bit "old-school", but you never know.
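The approach from the linked question can be sketched as follows: guard the one-time initialization with an atomic compare-and-swap so that re-entry is a no-op. This sketch uses C++11 atomics for illustration; on Windows the same flip is typically done with InterlockedCompareExchange around InitializeCriticalSection, and the names below are invented:

```cpp
#include <atomic>

// Stand-ins for the real critical section state; names are illustrative.
static std::atomic<int> g_guard(0);
static int g_init_calls = 0;   // counts how often the real init actually ran

// The first caller to flip the guard from 0 to 1 performs the
// initialization; every later call (including re-entry from a second
// attach) is a no-op instead of re-initializing and corrupting the heap.
void ensure_initialized() {
    int expected = 0;
    if (g_guard.compare_exchange_strong(expected, 1)) {
        ++g_init_calls;        // InitializeCriticalSection(&g_cs) would go here
    }
}
```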

SQL CLR Stored Procedure and Web Service

I am current working on a task in which I am needing to call a method in a web service from a CLR stored procedure.

A bit of background:

Basically, I have a task that requires a lot of crunching. If done strictly in SQL, it takes somewhere around 30-45 minutes to process. If I pull the same process into code, I can complete it in seconds, due to being able to optimize the processing much more efficiently. The only problem is that I have to have this process set up as an automated task in SQL Server.

In that vein, I have exposed the process as a web service (I use it for other things as well) and want the SQL CLR sproc to consume the service and execute the code. This allows me to have my automated task.

The problem:

I have read quite a few different topics regarding how to consume a web service in a CLR sproc and have done so effectively. Here is an example of what I have followed:

http://blog.hoegaerden.be/2008/11/11/calling-a-web-service-from-sql-server-2005/

I can get this example working without any issues. However, whenever I pair this process w/ a Web Service method that involves a database call, I get the following exceptions (depending upon whether or not I wrap in a try / catch):

Msg 10312, Level 16, State 49, Procedure usp_CLRRunDirectSimulationAndWriteResults, Line 0 .NET Framework execution was aborted. The UDP/UDF/UDT did not revert thread token.

or

Msg 6522, Level 16, State 1, Procedure MyStoredProc , Line 0

A .NET Framework error occurred during execution of user defined routine or aggregate 'MyStoredProc':

System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.EnvironmentPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.

System.Security.SecurityException:

at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet)

at System.Security.CodeAccessPermission.Demand()

at System.Net.CredentialCache.get_DefaultCredentials()

at System.Web.Services.Protocols.WebClientProtocol.set_UseDefaultCredentials(Boolean value)

at MyStoredProc.localhost.MPWebService.set_UseDefaultCredentials(Boolean Value)

at MyStoredProclocalhost.MPWebService..ctor()

at MyStoredProc.StoredProcedures.MyStoredProc(String FromPostCode, String ToPostCode)

I am sure this is a permission issue, but I can't, for the life of me, get it working. I have attempted using impersonation in the CLR sproc, among a few other things. Any suggestions? What am I missing?

From stackoverflow
  • I am just figuring this out, but this is what I got:

    You need to set the PERMISSION_SET to UNSAFE when you create the assembly in SQL:

    CREATE ASSEMBLY [<assemblyname>] FROM '<path>/<assemblyname>.dll'
    WITH PERMISSION_SET = UNSAFE
    GO
    

    If you are using a VS SQL Database Project to do your deployment, this can also be done by setting the Permission to UNSAFE on the Database Tab of Properties for the Database Project.

  • I have my CLR procedure consuming my web service just fine, but what if I'm on a network that requires the web service to ask for a certificate? How would I handle that?

Does the SaveAs method in Microsoft.Office.Outlook.Interop have a maximum file size?

Is there some type of undocumented file size limit when using this method to save to a UNC path?

http://msdn.microsoft.com/en-us/library/microsoft.office.interop.outlook._mailitem.saveas.aspx

I made an Outlook add-on that copies the currently selected email(s) to a network server. It works great until you try to save an email that has many attachments totalling ~10 MB, yet it works fine with an email that has a single attachment of the same size.

From stackoverflow
  • Hi,

    How are you saving your email?

    olHTML, olMSG, olRTF etc?

    76mel

How do you create a /postResults servlet for selenium core.

I looked at the documentation and all it says is to create a servlet... With what?

Is there code I need to use for this servlet?

Does it just need to be blank and have the name of postResults?

Is there a provided ant script for this?

I can't find anything on google or selenium's site that lets me in on this.

Thanks

UPDATE: I found the following example

       <servlet>
        <servlet-name>postResults</servlet-name>
        <servlet-class>com.thoughtworks.selenium.results.servlet.SeleniumResultsServlet</servlet-class>

        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>postResults</servlet-name>
        <url-pattern>/postResults</url-pattern>
    </servlet-mapping>

However, I can't seem to find this class file anywhere in my Selenium jars. I have the RC and regular core downloads, but no dice. Where do I get this jar file from?

From stackoverflow
  • The postResults servlet is useful in a continuous integration environment where you want to have the selenium test results sent to a URL of your choosing ( I believe it's configurable when setting up your selenium test ) and then have that server include the selenium results as part of the build results. If you don't want to do any post-processing on the selenium test results, then you don't have to setup a postResults servlet at all.

    mugafuga : This does not answer my question at all
    digitaljoel : Sorry, you seemed confused as to the purpose of the servlet, so I was just pointing out that you don't need to create a servlet if you don't have any plans to use the results in the servlet, which is what it sounds like when you ask about creating an empty one. My apologies.
    mugafuga : Selenium's results pane isn't comprehensive, so you have to have a results servlet, yet there is no easy-to-implement solution in the zip file that comes with Selenium. So they make you come up with it.
  • If you are using the pure html/javascript capabilities of Selenium like I am then you know that you do not get a results report when testing unless you have a postResults servlet setup somewhere to push the results to.

    I found a solution by taking apart the fitRunner plug-in to determine what I would need to set one up.

    This is a java solution btw.

    http://jira.openqa.org/browse/SEL-102 you can download a zip file here with everything you would need and a bunch of stuff you don't need.

    In your web app, just add the servlet mapping you find in the web.xml. Make sure the package you reference is created as shown below.

    Then add the following jars you will find in the zip to your web app library if you don't already have them.

    jstl.jar and standard.jar

    Create two classes. The first is your.package.path.SeleniumResultsServlet; paste the following code into it:

    package your.package.path;
    
    import java.io.IOException;
    import java.util.Collection;
    import java.util.LinkedList;
    import java.util.List;
    
    import javax.servlet.ServletException;
    import javax.servlet.ServletOutputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    
    public class SeleniumResultsServlet extends HttpServlet {
    
        private static TestResults results;
    
        public static TestResults getResults() {
            return results;
        }
    
        public static void setResults(TestResults testResults) {
            results = testResults;
        }
    
        protected void doGet(HttpServletRequest request,
          HttpServletResponse response) throws ServletException, IOException {
    
         if (request.getParameter("clear") != null) {
          results = null;
          ServletOutputStream out = response.getOutputStream();
                out.println("selenium results cleared!");
         } else {
          forwardToResultsPage(request, response);
         }
        }
    
        protected void doPost(HttpServletRequest request, HttpServletResponse response) 
                                                    throws ServletException, IOException {
            String result = request.getParameter("result");
            String totalTime = request.getParameter("totalTime");
            String numTestPasses = request.getParameter("numTestPasses");
            String numTestFailures = request.getParameter("numTestFailures");
            String numCommandPasses = request.getParameter("numCommandPasses");
            String numCommandFailures = request.getParameter("numCommandFailures");
            String numCommandErrors = request.getParameter("numCommandErrors");
            String suite = request.getParameter("suite");
    
         int numTotalTests = Integer.parseInt(numTestPasses) + Integer.parseInt(numTestFailures);
    
         List testTables = createTestTables(request, numTotalTests);
    
         results = new TestResults(result, totalTime,
           numTestPasses, numTestFailures, numCommandPasses,
           numCommandFailures, numCommandErrors, suite, testTables);
    
            forwardToResultsPage(request, response);
        }
    
        private List createTestTables(HttpServletRequest request, int numTotalTests) {
         List testTables = new LinkedList();
         for (int i = 1; i <= numTotalTests; i++) {
                String testTable = request.getParameter("testTable." + i);
                System.out.println("table " + i);
                System.out.println(testTable);
                testTables.add(testTable);
            }
         return testTables;
        }
    
    
        private void forwardToResultsPage(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            request.setAttribute("results", results);
            request.getRequestDispatcher("/WEB-INF/jsp/viewResults.jsp").forward(request, response);
        }
    
        public static class TestResults {
    
            private final String result;
            private final String totalTime;
            private final String numTestPasses;
            private final String numTestFailures;
            private final String numCommandPasses;
         private final String numCommandFailures;
            private final String numCommandErrors;
            private final String suite;
    
         private final List testTables;
    
            public TestResults(String postedResult, String postedTotalTime, 
                    String postedNumTestPasses, String postedNumTestFailures, 
                    String postedNumCommandPasses, String postedNumCommandFailures, 
                    String postedNumCommandErrors, String postedSuite, List postedTestTables) {
    
                result = postedResult;
                numCommandFailures = postedNumCommandFailures;
                numCommandErrors = postedNumCommandErrors;
                suite = postedSuite;
                totalTime = postedTotalTime;
                numTestPasses = postedNumTestPasses;
                numTestFailures = postedNumTestFailures;
                numCommandPasses = postedNumCommandPasses;
          testTables = postedTestTables;
            }
    
            public String getDecodedTestSuite() {
                return new UrlDecoder().decode(suite);
            }
    
            public List getDecodedTestTables() {
                return new UrlDecoder().decodeListOfStrings(testTables);
            }
    
            public String getResult() {
                return result;
            }
            public String getNumCommandErrors() {
                return numCommandErrors;
            }
            public String getNumCommandFailures() {
                return numCommandFailures;
            }
            public String getNumCommandPasses() {
                return numCommandPasses;
            }
            public String getNumTestFailures() {
                return numTestFailures;
            }
            public String getNumTestPasses() {
                return numTestPasses;
            }
            public String getSuite() {
                return suite;
            }
            public Collection getTestTables() {
                return testTables;
            }
            public String getTotalTime() {
                return totalTime;
            }
         public int getNumTotalTests() {
          return Integer.parseInt(numTestPasses) + Integer.parseInt(numTestFailures);
         }
        }
    }
    

    Then go ahead and create a UrlDecoder class in the same package:

    package your.package.path;

    import java.io.UnsupportedEncodingException;
    import java.net.URLDecoder;
    import java.util.Iterator;
    import java.util.LinkedList;
    import java.util.List;

    /**
     * @author Darren Cotterill
     * @author Ajit George
     * @version $Revision: $
     */
    public class UrlDecoder {

        public String decode(String string) {
            try {
                return URLDecoder.decode(string, System.getProperty("file.encoding"));
            } catch (UnsupportedEncodingException e) {
                return string;
            }
        }

        public List decodeListOfStrings(List list) {
            List decodedList = new LinkedList();

            for (Iterator i = list.iterator(); i.hasNext();) {
                decodedList.add(decode((String) i.next()));
            }

            return decodedList;
        }
    }
    

    In your WEB-INF, create a folder called jsp.

    Copy the viewResults.jsp that is in the zip file into it, and copy the c.tld to /WEB-INF.

    restart your server

    If you get some goofy errors when trying to load the postResults servlet from Selenium, try changing the taglib reference in the viewResults.jsp to use the Sun URL instead; due to different version dependencies it may not work otherwise (servlet 1.0 vs. 2.0 stuff).

    <%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
    

    Then when you run the test runner and choose automatic option in your selenium screen you will have a postResults servlet that you can use and customize at will.

    Hope this helps others. Having a way to format test results is a great help when trying to create documentation, and the stuff that comes out of the zip with regular Selenium doesn't give you this basic functionality; you have to bake it up yourself.