DZone

Ricci Gian Maria

Self-Employed Consultant

Sassoferrato, IT

Joined Sep 2008

http://www.codewrecks.com/blog

About

I'm an independent consultant working in Italy with more than 14 years of experience designing and developing applications for the .NET Framework, in both Windows and web environments. I'm particularly involved in Continuous Integration strategies and in designing application infrastructure, since I'm a lover of patterns and Application Lifecycle Management. I'm a great fan of communities and a co-founder of DotNetMarche, an Italian community focused on .NET development, and I love blogging about technology.

Stats

Reputation: 1128
Pageviews: 933.1K
Articles: 22
Comments: 5

Articles

Azure DevOps Agent With Docker Compose
Learn how to run an Azure DevOps Linux build agent with Docker Compose.
January 7, 2020
· 21,113 Views · 4 Likes
Converting a Big Project to .NET Standard Without a Big Bang
Avoid complications when converting .NET Full Framework to .NET Standard.
August 8, 2019
· 11,096 Views · 1 Like
How to Configure Visual Studio as Diff and Merge Tool for Git
Learn more about configuring Visual Studio for Git.
July 31, 2019
· 38,880 Views · 3 Likes
How to Edit a YAML Azure DevOps Pipeline
If you aren't convinced that having your build definition as code rather than on a server is better, IntelliSense and Azure DevOps might change your mind.
May 28, 2019
· 24,210 Views · 1 Like
Error Publishing .NET Core App in Azure DevOps YAML Build
A dev quickly walks us through an error he received when working in .NET Core and how he fixed it with a little YAML.
May 6, 2019
· 5,808 Views · 1 Like
Build and Deploy an ASP.NET App With Azure DevOps
We look at how to create a DevOps friendly web application in ASP.NET, taking advantage of the framework's built-in features.
April 8, 2019
· 28,806 Views · 1 Like
WIQL Editor Extension for Azure DevOps
Check out one of our favorite add-ins for Azure DevOps — the Work Item Query Language Editor.
February 19, 2019
· 12,829 Views · 1 Like
Git and The Hell of Case Sensitivity
CASING is ImPoRtAnT. Especially in git, where using the incorrect casing could become a real source of irritation to developers.
January 24, 2019
· 19,406 Views · 3 Likes
Converting PowerShell Tasks in YAML
YAML builds allow you to save time and automate more. Let's explore the best ways to use PowerShell tasks in YAML to expedite your workflows.
August 17, 2018
· 12,089 Views · 1 Like
Hyper-V and Windows AutoLogon
The ability for build and release agents to log in automatically in order to complete UI-based integration tests is important. Windows developers, read on.
Updated July 22, 2017
· 8,132 Views · 1 Like
Why You Should Optimize Your Local Git Repository From Time to Time
With an SSD, you can use Git without any performance problems, even when you've got some large repositories. Read on to find out how.
June 23, 2017
· 10,143 Views · 4 Likes
Long Numbers Are Truncated in MongoDB Shell
MongoDB is notorious for its quirks; in this short read, MVB Ricci Gian Maria explains how he solved this problem with the popular document store.
June 6, 2016
· 10,874 Views · 4 Likes
Installing SonarQube on Windows and SQL Server
Trying to install SonarQube on Windows and SQL Server? Here's a tutorial to get you started.
November 3, 2015
· 17,372 Views · 5 Likes
Git for Windows, Getting Invalid Username or Password with Wincred
If you use HTTPS to communicate with your Git repository (e.g., GitHub or Visual Studio Online), you usually set up a credential manager to avoid entering credentials for each command that contacts the server. With recent versions of Git, you can configure wincred with this simple command:

git config --global credential.helper wincred

This morning, I started getting an error while trying to push some commits to GitHub:

$ git push
remote: Invalid username or password.
fatal: Authentication failed for 'https://github.com/proximosrl/jarvis.documentstore.git/'

If I remove the credential helper (git config --global --unset credential.helper), everything works: Git asks me for a username and password and I'm able to do everything. But as soon as I re-enable the credential helper, the error returns.

This problem is probably caused by some corruption of the stored credentials; usually you can simply clear them, and at the next operation you will be prompted for credentials and everything starts working again. The question is: where are credentials stored for wincred? If you use wincred as your credential.helper, Git stores your credentials in the standard Windows Credential Manager, which you can simply open on your computer.

Figure 1: Credential Manager in your Control Panel settings

Opening Credential Manager, you can manage Windows and web credentials. Now simply have a look at both Web Credentials and Windows Credentials, and delete everything related to GitHub (or whatever server you are using). The next time you issue a Git command that requires authentication, you will be prompted for credentials again, and the new credentials will be stored back in the store.

Gian Maria.
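The recovery steps above can be sketched as a short session. The helper name and host come from the post; the throwaway HOME is only so the sketch doesn't touch a real configuration:

```shell
# Scratch HOME so this demo leaves your real config alone.
export HOME="$(mktemp -d)"

# The helper that is replaying the corrupted credentials:
git config --global credential.helper wincred

# 1. Remove it so Git stops using the stored (corrupt) credentials.
git config --global --unset credential.helper

# 2. On Windows, delete the stale github.com entries in Credential
#    Manager, e.g.:
#      cmdkey /delete:git:https://github.com
#    (commented out here; cmdkey exists only on Windows)

# 3. Re-enable the helper; the next authenticated command prompts
#    again and stores fresh credentials.
git config --global credential.helper wincred
git config --global credential.helper
```

The last command should print wincred, confirming the helper is active again.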
June 23, 2015
· 20,228 Views
How to Deal with Slow Unit Tests with Visual Studio Test Runner
One of the most dreadful problems of unit testing is slow tests. If your whole suite runs in 10 minutes, it is normal for developers not to run the whole suite at each build. One of the most common questions is: how can I deal with slow unit tests?

Here is my actual scenario: in a project I'm working on, we have some multilingual full-text search done in Elasticsearch, and we have a battery of unit tests that verify that searches work as expected. Since each test deletes all documents, inserts a bunch of new documents, and finally commits the Lucene index, execution time is high compared to the rest of the tests. Each test needs almost 2 seconds to run on my workstation, where I have a really fast SSD and plenty of RAM. This kind of test cannot be run in memory or with some fancy trick to make it run quickly. We currently have about 30 tests that execute in less than one second, and another 13 tests that run in about 23 seconds; this is clearly unacceptable. After a few hours of work, we had already reached the point where running the whole suite becomes annoying.

The solution

This is a really common problem, and it is quite simple to fix. First of all, Visual Studio Test Runner tells you the execution time of each unit test, so you can immediately spot slow tests. When you identify slow tests, you can mark them with a specific category; I use SlowTest:

[TestFixture]
[Category("elasticsearch")]
[Category("slowtest")]
public class EsSearcherFixture : BaseTestFixtureWithHelpers

Since I know in advance that these tests are slow, I immediately mark the entire class with the SlowTest category. If you have no idea which of your tests are slow, I suggest grouping tests by duration in Visual Studio Test Runner.

Figure 1: Group tests by duration

The result is interesting, because Visual Studio considers every test that needs more than one second to be slow. I tend to agree with this distinction.

Figure 2: Tests are now grouped by duration

This permits you to immediately spot slow tests, so you can add the SlowTest category to them. If you keep your unit tests organized, with good use of categories, you can simply ask VS Test Runner to exclude slow tests with the filter -Trait:"SlowTest".

Figure 3: Thanks to filtering, I can now continuously execute only the tests that are not slow.

I suggest you do a periodic check to verify that every developer is using the SlowTest category wisely: just group by duration, filter out SlowTest, and you should have no tests that are marked slow.

Figure 4: Removing the SlowTest category and grouping by duration should list no slow tests.

The nice part is that I'm using NUnit, because Visual Studio Test Runner supports many unit test frameworks thanks to the concept of test adapters. If you keep your tests well organized, you will gain maximum benefit from them :).
July 4, 2014
· 17,547 Views
Git Showing File as Modified Even if It Is Unchanged
This is one annoying problem that happens sometimes to Git users. The symptom: the git status command shows some files as modified (and you are sure you have not modified those files); you revert all changes with git checkout -- . but the files are still in the modified state if you issue another git status. This is a really annoying problem: suppose you want to switch branches with git checkout branchname; you will find that Git does not allow you to switch because of uncommitted changes.

This problem is likely caused by end-of-line normalization (I strongly suggest you read all the details in the Pro Git book, or read GitHub's help on the topic). I do not want to go into the details of this feature; I only want to help people diagnose and avoid this kind of problem. To understand whether you really have a line-ending issue, you should run the git diff -w command to verify what really changed in the files that git status reports as modified. The -w option tells Git to ignore whitespace and line endings; if this command shows no differences, you are probably a victim of a problem in line-ending normalization. This is especially true if you are working with git svn, connecting to a Subversion repository where developers did not pay attention to line endings; it usually happens when you have files with mixed CRLF / CR / LF.

If you work in a mixed environment (Unix/Linux, Windows, Macintosh), it is better to find the files that are listed as modified and normalize their line endings manually (or with some tool). If you do not work in a mixed environment, you can simply turn off EOL normalization for the single repository where you experience the problem. To do this, you can issue git config --local core.autocrlf false, but this works only for you and not for all the other developers working on the project. Moreover, some people report that they still have the problem even with core.autocrlf set to false.

Remember that Git supports .gitattributes files, used to change settings for a single subdirectory. If you set core.autocrlf to false and still have line-ending normalization problems, search for .gitattributes files in every subdirectory of your repository, and verify whether any of them has a line that turns normalization on:

* text=auto

You can then turn it off in all the .gitattributes files you find in your repository (the attribute is unset with a leading minus; "text=off" is not a recognized value):

* -text

To be sure that every developer on the team works with normalization turned off, you should place a .gitattributes file in the repository root with the attribute unset. Remember that it is a better option to normalize files and leave normalization turned on; but if you are working with legacy code imported from another VCS, or you work with git svn, git-tf, or similar tools, it is probably better to turn it off if you start experiencing these kinds of problems.
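A quick way to see which setting actually wins for a given file is git check-attr. This sketch (throwaway repository, made-up file names) shows a nested .gitattributes re-enabling normalization and then unsetting it:

```shell
# Scratch repository just for the demo.
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config core.autocrlf false

# A .gitattributes in a subdirectory can re-enable normalization
# even though core.autocrlf is false.
mkdir sub
echo '* text=auto' > sub/.gitattributes
git check-attr text -- sub/file.txt   # -> sub/file.txt: text: auto

# Unsetting the attribute turns normalization off for that subtree.
echo '* -text' > sub/.gitattributes
git check-attr text -- sub/file.txt   # -> sub/file.txt: text: unset
```

Running check-attr on the files git status flags tells you immediately whether a stray attribute file is involved.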
April 29, 2014
· 89,072 Views
Relations with not-found="ignore"
NHibernate has a lot of interesting and specific options for mapping entities that can cover almost every scenario you have in mind, but you need to be aware of the implications each advanced option has on performance. Suppose you are in a legacy-database scenario where Entity A references Entity B, but someone outside the control of NHibernate can delete records from the table used by Entity B without clearing the corresponding referencing field on Entity A. You will end up with a database full of broken references, where rows of Table A reference, through an id field, a record in Table B that no longer exists. When this happens, if you load an entity of type A that references an entity of type B that was deleted, NHibernate will throw an exception when you try to access the navigation property, because it cannot find the related entity in the database.

If you know NHibernate, you can use the not-found="ignore" mapping option, which basically tells NHibernate to ignore a broken reference key: if Entity A references an Entity B that was already deleted from the database, the reference is ignored, the navigation property is set to null, and no exception occurs. This solution is not without side effects. First of all, every time you load an entity of type A, another query is issued to the database to verify whether the related Entity B is really there. This effectively disables lazy loading, because the related entity is always selected. This is not an optimal scenario, because you end up with a lot of extra queries, and it happens because not-found="ignore" is only a way to avoid facing the real problem: you have broken foreign keys in your database.

My suggestion is: fix the data, keep the database clean without broken foreign keys, and remove all not-found="ignore" mapping options unless you really have no other solution. Please remember that even if you are using NHibernate, you should not forget SQL's capabilities. As an example, SQL Server (and almost every relational database on the market) can set up rules for foreign keys, e.g., ON DELETE SET NULL, which automatically sets a foreign key column to null when the related record is deleted. Such a feature prevents broken foreign keys, even if some legacy process manipulates the database, deleting records without a corresponding update of the related foreign key.
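As a sketch of that closing suggestion (table and column names are invented for the example), the rule is declared when the foreign key is created:

```sql
-- Hypothetical schema: TableA.EntityBId references TableB.Id.
-- When a TableB row is deleted, the engine nulls the referencing
-- column, so no broken foreign key is ever left behind.
ALTER TABLE TableA
ADD CONSTRAINT FK_TableA_TableB
    FOREIGN KEY (EntityBId) REFERENCES TableB (Id)
    ON DELETE SET NULL;
```

With this rule in place, the not-found="ignore" workaround (and its extra query per load) becomes unnecessary.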
June 19, 2013
· 4,483 Views
Highlight Matched Text in Solr + Tika Indexed Documents
I've already explained how to index documents with Solr and Tika, and in this article I'll explain how you can not only search for documents that match your query, but even return text extracts that show where the document matches the query. To achieve this, you should store the full content of the document inside your index. Usually I create a couple of fields: one called content, which contains the text of the file, and a catch-all field called text, into which a copyField directive automatically copies that value. The text field is multivalued and not stored; it is only indexed, to permit searching across the various fields of the document. The content field stores the text extracted by Tika and is useful both for highlighting and for troubleshooting extraction problems, because it contains the exact text Tika extracted.

Now suppose you want to search for the term "branch" and also want to highlight the parts of the text where that term is found. You can simply issue a query that asks for highlighting; it is really simple:

http://localhost:8080/testinstance/tikacrawl/select?q=text%3Abranch&fl=id%2Ctitle%2Cauthor&wt=xml&hl=true&hl.snippets=20&hl.fl=content&hl.usePhraseHighlighter=true

This query asks for documents whose text contains the word "branch"; I want to extract (fl=) only the id, title, and author fields, in XML format, and with hl=true I'm asking for snippets of matching text. hl.snippets=20 instructs Solr to return a maximum of 20 snippets, and hl.usePhraseHighlighter=true uses a specific highlighter that tries to extract a single phrase from the text. The most important parameter is hl.fl=content, which specifies the field of the document containing the text used for highlighting.

In the results, after all the matching documents, there is a new section that contains the highlights for each document.

Figure 1: Highlights for the "TFS Branching Guide - Scenarios 2.0.pdf" file

The name of each element matches the id of the document (in my configuration, the full path of the file), and a list of highlights follows. But the true power of Solr comes out when you start to use language-specific fields. I just changed the type of content and text in schema.xml from general_text to text_en, and this simple modification enables a more specific tokenizer, capable of real full-text searches. Suppose you want to know all the documents that deal with branching strategies; here is a possible query:

http://localhost:8080/testinstance/tikacrawl/select?q=text%3A%22branch+strategy%22~3&fl=id%2Ctitle&wt=xml&hl=true&hl.snippets=5&hl.fl=content&hl.usePhraseHighlighter=true&hl.fragsize=300

The key is the search query text:"branch strategy"~3, which states that I'm interested in documents containing both the terms branch and strategy within a relative distance of no more than three words. Since text was indexed with the text_en field type, I get full-text search, and the highlights confirm it.

Figure 2: Highlights for a proximity query with full text

As you can see, the word "branching" matches even though I searched for "branch". And voilà! You have full-text search inside file content with a minimal amount of work and a simple REST interface for querying the index.
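The field setup described above looks roughly like this in schema.xml. This is a sketch: the field names and the text_en type follow the post's description, the remaining attributes are assumptions:

```xml
<!-- "content": exact text extracted by Tika; stored so it can be highlighted -->
<field name="content" type="text_en" indexed="true" stored="true"/>

<!-- "text": catch-all search field; indexed only, never stored -->
<field name="text" type="text_en" indexed="true" stored="false" multiValued="true"/>

<!-- copy the extracted content into the catch-all field -->
<copyField source="content" dest="text"/>
```

Storing content (but not text) is what makes hl.fl=content work while keeping the index size down.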
June 10, 2013
· 14,097 Views
How to Configure diff and Merge Tool in Visual Studio Git Tools
If you are using Visual Studio plugin for Git, but you have also configured Git with MSys git, probably you could be surprised by some Visual Studio behavior.
March 20, 2013
· 74,768 Views · 1 Like
MongoDB and the Concept of Identity in NoSQL Databases
In this article, I deal with a different NoSQL database called MongoDB, a mature NoSQL engine born outside the .NET world, to clarify the concept of an id in a typical NoSQL database. Installation of Mongo is really simple: just download, uncompress, locate the bin folder, and type this from an administrator console prompt to install Mongo as a service:

mongod --install --logpath c:\xxxx --dbpath c:\yyyy

You can find plenty of installation guides on the internet, but with the above install command you create a Windows service that automatically starts MongoDB on your machine using the specified data folder. Now you should download the C# driver to connect from .NET code, but if you like using LINQ, you can install FluentMongo directly with NuGet.

Figure 1: Install FluentMongo with NuGet

FluentMongo is a library that gives some LINQ capability over the standard driver, and by adding a NuGet reference to FluentMongo you automatically get a reference to the official driver. Now you are ready to insert your first records into MongoDB with the code below.

MongoServer server = MongoServer.Create();
MongoDatabase databaseTest = server.GetDatabase("test");
var untyped = databaseTest.GetCollection("untyped");
untyped.Save(new BsonDocument { { "name", "untyped1" } });
BsonDocument secondDocument = BsonDocument.Parse("{name: 'untyped2', blabla: 'bla bla value'}");
untyped.Save(secondDocument);

On the first line, I create a connection to the MongoDB server, passing no parameters to connect to the local server; then I obtain a reference to a MongoDatabase object called "test" with the MongoServer::GetDatabase() method, and finally I get a reference to a collection named "untyped" with the MongoDatabase::GetCollection() method. This is quite similar to SQL Server or another SQL database: you have a server, the server contains several databases, and each database is composed of tables. In the same way, Mongo is divided into server/database/collection, where a collection contains documents.

MongoDB stores data in JSON format, and to insert data into a collection you can simply create a BsonDocument, an object defined by the C# driver assembly that represents a document composed of a series of key-value pairs. To initialize a BsonDocument you can pass a collection of elements, or, if you feel more comfortable with a JSON string representation, you can use BsonDocument.Parse() to specify the document directly as a JSON string. After you insert the above documents, you can use MongoVUE to see what is contained inside the database.

Figure 2: Use MongoVUE to see what is inside the database

The interesting aspect is that each document has a unique id, even though I did not specify any special property in the code. This is standard behavior for NoSQL databases: if you do not specify an id property, the database engine creates a unique id on its own to identify the document. The id is a key factor for Mongo and other NoSQL storage; if you try to store a document directly inside the collection, specifying JSON content, you will get an error:

untyped.Save("{name: 'json', attribute:'attribute content'}");

The MongoCollection object has a Save overload that accepts a string, but the above call fails with the error "Subclass must implement GetDocumentId". The previous code works because one of the specific capabilities a BsonDocument implements is the ability to manage id generation, while plain JSON does not have this capability. If you need to know the id generated by the database, you can query the BsonDocument for its unique id after it is saved in a Mongo collection (remember that the id is not available until you save the document).

BsonDocument secondDocument = BsonDocument.Parse("{name: 'untyped2', blabla: 'bla bla value'}");
object id;
Type idType;
IIdGenerator generator;
untyped.Save(secondDocument);
secondDocument.GetDocumentId(out id, out idType, out generator);

Basically, you are asking your BsonDocument to return the generated id, as well as the type of the id and the generator that Mongo used to generate that specific id. The result is represented in this snippet:

Figure 3: The three objects you get with a call to GetDocumentId: id, idType, and the generator

As you can see, the id is an instance of type MongoDB.Bson.ObjectId, based on the BsonValue base class, and the generator is an instance of ObjectIdGenerator. This type of id is specific to Mongo, and the documentation states that a BSON ObjectId is a 12-byte value consisting of a 4-byte timestamp (seconds since epoch), a 3-byte machine id, a 2-byte process id, and a 3-byte counter. Note that the timestamp and counter fields must be stored big-endian, unlike the rest of BSON; this is because they are compared byte-by-byte, and we want to ensure a mostly increasing order.

If you want a generator that creates integer ids, like an identity column in SQL Server, you will find that it is simply not available out of the box, because an int value is not guaranteed to be unique if you use sharding. Sharding is a technique that partitions data across different physical instances, so each instance must generate ids that are unique across all instances, and this prevents the use of a simple Int32 id. Clearly, in the .NET world a GUID is guaranteed to be unique and is more .NET-oriented, so MongoDB has a GUID id generator, which can be used with the following snippet of code:

BsonDocument thirdDocument = BsonDocument.Parse("{name: 'untyped3', anotherproperty: 'xxxxxxxxxxxxxxxxxxxxxxx'}");
var id2 = MongoDB.Bson.Serialization.IdGenerators.GuidGenerator.Instance.GenerateId(untyped, thirdDocument);
thirdDocument.SetDocumentId(id2);

The key is using the GuidGenerator (in the MongoDB.Bson.Serialization.IdGenerators namespace) to generate a valid GUID id value, then calling the SetDocumentId method of BsonDocument to set the id manually, rather than relying on automatic id generation. If you look at the DB, you will find that the document with the GUID id really has a different id type.

Figure 4: The document with a GUID id is represented differently in MongoVUE

As you can verify, there is no problem having documents with different id types in the same collection. This demonstrates that a NoSQL database has a concept of document id similar to the concept of id in a standard SQL database: you can use the engine's native id generation, which produces a valid id during insertion, or you can assign your own id to the document. But basically, the whole concept of id is engine-related and has no business meaning, so I strongly discourage using anything with a business meaning as the id of a document. No one prevents you from adding to the document a property called "myId", or something else that has a business meaning and can be used as a logical id, and letting the engine handle the internal id by itself.
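As a small illustration of the ObjectId layout quoted above, the creation time can be recovered from the first 4 bytes of any ObjectId. The id below is made up for the example:

```shell
# An ObjectId is 24 hex characters; the first 8 encode a big-endian
# Unix timestamp (seconds since epoch).
oid="4f8a3c2b9d1e4a0b12345678"   # hypothetical ObjectId
ts=$((16#${oid:0:8}))            # 0x4f8a3c2b -> 1334459435
echo "$ts"
# Render it as a date (GNU date syntax; use `date -u -r "$ts"` on BSD/macOS).
date -u -d "@$ts" 2>/dev/null || date -u -r "$ts"
```

This is also why ObjectIds sort in a mostly increasing order: two ids generated later compare greater byte-by-byte because their timestamp prefix is greater.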
December 26, 2012
· 7,668 Views
How to Add an Existing Project to TFS
This is a super basic and easy question, but I quite often find people asking me how to add an existing project to a TFS team project. It turns out there is more than one way of doing this, but I usually suggest this simple path, which is quite understandable for people coming from the Subversion world.

First of all, be sure you have a valid workspace on your computer that maps the folder in the team project where you want to put your existing project, and issue a Get Latest. If the team project is new, you can simply create a workspace by going to the menu File -> Source Control -> Workspaces, pressing Add, and creating a workspace that maps the source to a folder on your PC.

Figure 1: Creating a workspace that maps the whole folder of the source control

Now simply go to the local folder and move the existing solution from its original location to the mapped folder; then go to Team Explorer -> Source Control and manually add all the files to TFS source control. In Figure 2, I show the process to accomplish this task: first, select Source Control and press the Add button; Visual Studio then presents you with all the files that are in your local folder but not present in source control (the candidates to be added).

Figure 2: Adding existing files to source control

Now you are presented with a list of all the files that will be added to TFS source control, as seen in Figure 3.

Figure 3: List of files that get added to source control

As you can see, some of the files are excluded (22 in Figure 3); this happens because Visual Studio already knows that certain types of file should be excluded from source control, like the bin and debug folders and all *.dll files. If you have a lib folder where you store third-party libraries, you can go to the Excluded Items tab in Figure 3 and manually add the excluded files you want in source control. Since this window is usually cluttered with all the bin and obj directories, I find it simpler to: 1) in a first pass, add all the files suggested by Visual Studio; 2) browse to the lib directory (or whatever folder contains third-party libraries) and explicitly add all the files in that directory.

Now all the files are in a pending-add state, which means they will be sent to source control with a check-in; but before the check-in phase, you should open the solution file from the Source Control Explorer window. After the solution opens, Visual Studio should tell you that the solution is in a source-control-monitored folder, but source control is not enabled. This basically means that the source files are correctly linked to the source control system, but the Visual Studio integration is not enabled; you can simply press Yes and let Visual Studio do the binding between projects and source control. If the binding does not happen automatically, you will get the window of Figure 4 (you can always open this window with the menu File -> Source Control -> Change Source Control). In this window, you can simply select one project at a time and press the Bind button to perform the bind, until all the project files are in Connected status.

Figure 4: Binding window where you can bind solution and project files to TFS

Now you can check in everything. Usually you should then create another workspace, or ask another member of the team to do a Get Latest of the just-inserted solution, to verify that all the needed files were correctly added to source control.

Gian Maria.

Source: http://www.codewrecks.com/blog/index.php/2012/01/27/add-existing-project-to-tfs
February 9, 2012
· 116,395 Views · 1 Like
Disable JavaScript Errors in the WPF WebBrowser Control
I work with the WebBrowser control in WPF, and one of the most annoying problems I have with it is that sometimes you browse sites that raise a lot of JavaScript errors, and the control becomes unusable. Thanks to my friend Marco Campi, yesterday I solved the problem. Marco pointed me to a link that does not deal with the WebBrowser control, but uses a simple JavaScript script to disable error handling in a web page. This solution is really simple, and it seems to me the right way to solve the problem. The key to the solution is handling the Navigated event raised by the WebBrowser control. First of all, I have my WebBrowser control wrapped in a custom class to add functionality; in that class, I declare this constant:

private const string DisableScriptError =
    @"function noError() { return true; } window.onerror = noError;";

This is the very same script as in the linked article. Then I handle the Navigated event:

void browser_Navigated(object sender, System.Windows.Navigation.NavigationEventArgs e)
{
    InjectDisableScript();
}

Actually, I'm doing a lot of other things inside the Navigated event handler, but the very first one is injecting into the page the script that disables JavaScript errors.

private void InjectDisableScript()
{
    HTMLDocumentClass doc = Browser.Document as HTMLDocumentClass;
    HTMLDocument doc2 = Browser.Document as HTMLDocument;
    // Create the script element that suppresses the errors
    IHTMLScriptElement scriptErrorSuppressed = (IHTMLScriptElement)doc2.createElement("SCRIPT");
    scriptErrorSuppressed.type = "text/javascript";
    scriptErrorSuppressed.text = DisableScriptError;
    IHTMLElementCollection nodes = doc.getElementsByTagName("head");
    foreach (IHTMLElement elem in nodes)
    {
        // Append the script to the head so it takes effect
        HTMLHeadElementClass head = (HTMLHeadElementClass)elem;
        head.appendChild((IHTMLDOMNode)scriptErrorSuppressed);
    }
}

This is the code that really solves the problem. The key is creating an IHTMLScriptElement with the script and injecting it into the head of the page; this effectively disables the JavaScript errors. I have not fully tested it against a lot of sites to verify that it intercepts all errors, but it seems to work very well with many links that gave us a lot of problems in the past.

Alk.
October 5, 2010
· 20,246 Views

Comments

Simple Templates Using PHP - Make Your Own Templates

Mar 05, 2013 · Tony Thomas

You are right; I redid the test after a long time, and the default settings of Crystal had changed. I replayed the same test (2x500) and the results are quite the same (some numbers are even higher on the Vertex 4 with a 500 MB test file).

As for stability, I own an OCZ Vertex 2; it has run for three years on three different machines and I have never had a problem. The Vertex 3 and 4 are stable and run nearly 14 hours a day for months. OCZ's first firmware for the Vertex 2 really had lots of problems, but now it is really stable.

Generally speaking, the 256 GB drives are really faster than the 128 GB ones, and there is also a performance difference between generations, even if you fail to notice it in standard usage. Since I'm mainly a developer, it is not useful for me to understand what happens with big files (10 GB); I'm interested in random reads and writes (database, compiling, etc.). :)

Alk.

Witch version of browser is used by the WebBrowser control?

Aug 24, 2011 · Tony Thomas

It was my fault for the "witch" :). I'm Italian, and when I blog it happens that I make mistakes in English :). I hope you liked the content and forgive my syntax errors :).

Faster invoke method of unknown objects with Expression tree part2

Oct 06, 2008 · Ricci Gian Maria

Uff, the Windows Live Writer plugin inserted the wrong link :( The right link for the post is http://www.nablasoft.com/alkampfer/index.php/2008/10/06/faster-invoke-method-of-unknown-objects-with-expression-tree-part2/
