Reading OpenDocument Spreadsheets Using C#
Excel and its file formats are not the only spreadsheet tools in wide use. There are also users on Linux and Macs, and they often work with OpenOffice and other open-source office packages that store data in ODF instead of OpenXML. In this post I show how to read an Open Document spreadsheet in C#.
Importer as an example
My previous post about importers showed how to add flexible importer support to your web application. This post walks through a practical example of one of my importers; sensitive code is, of course, omitted. We start with the ODS importer class and add new methods as we go.
public class OdsImporter : ImporterBase
{
    public OdsImporter()
    {
    }

    public override string[] SupportedFileExtensions
    {
        get { return new[] { "ods" }; }
    }

    public override ImportResult Import(Stream fileStream, long companyId, short year)
    {
        string contentXml = GetContentXml(fileStream);
        var result = new ImportResult();

        var doc = XDocument.Parse(contentXml);
        var rows = doc.Descendants("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-row").Skip(1);

        foreach (var row in rows)
        {
            ImportRow(row, companyId, year, result);
        }

        return result;
    }
}
The class given here simply extends the base class for importers (the previous post used an interface, but as I mentioned there, you move to an abstract base class when writing code for real projects).
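For context, here is a minimal sketch of what such a base class and its result type could look like. Only the member signatures used above (SupportedFileExtensions, Import, ImportResult) come from the snippet; the rest is an assumption, since the actual classes are defined in the previous post.

using System.Collections.Generic;
using System.IO;

// Hypothetical result type; the real ImportResult is defined in the previous post.
public class ImportResult
{
    public ImportResult()
    {
        Errors = new List<string>();
    }

    public IList<string> Errors { get; private set; }
    public int ImportedRowCount { get; set; }
}

// Minimal abstract base class matching the members OdsImporter overrides.
public abstract class ImporterBase
{
    // File extensions (without the dot) this importer can handle.
    public abstract string[] SupportedFileExtensions { get; }

    // Reads the uploaded file and returns the outcome of the import.
    public abstract ImportResult Import(Stream fileStream, long companyId, short year);
}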
The Import method reads data from the *.ods file, parses it (the content is XML), finds all data rows, and imports the data. Note that the first row is skipped, because the first row on my sheet is always the header row.
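To show how the importer plugs into a web application, here is a hypothetical upload action. The controller, action name, parameters and view are made up for illustration; only OdsImporter and its Import signature come from the code above.

using System.Web;
using System.Web.Mvc;

public class ImportController : Controller
{
    // Hypothetical ASP.NET MVC action that receives the uploaded *.ods file.
    [HttpPost]
    public ActionResult ImportData(HttpPostedFileBase file, long companyId, short year)
    {
        var importer = new OdsImporter();

        using (var stream = file.InputStream)
        {
            ImportResult result = importer.Import(stream, companyId, year);
            return View(result);
        }
    }
}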
Reading the ODS file
Our import method starts by getting the XML out of the *.ods file. ODS files, like OpenXml files, are zipped containers holding several files. We need content.xml, as all the data is kept there. To get its contents we use the SharpZipLib library to read the uploaded file as a *.zip archive.
private static string GetContentXml(Stream fileStream)
{
    var contentXml = "";

    using (var zipInputStream = new ZipInputStream(fileStream))
    {
        ZipEntry contentEntry = null;

        // Walk the zip entries until content.xml is found.
        while ((contentEntry = zipInputStream.GetNextEntry()) != null)
        {
            if (!contentEntry.IsFile)
                continue;
            if (contentEntry.Name.ToLower() == "content.xml")
                break;
        }

        if (contentEntry == null || contentEntry.Name.ToLower() != "content.xml")
        {
            throw new Exception("Cannot find content.xml");
        }

        // Read the current entry (content.xml) in 2 KB chunks.
        var bytesResult = new byte[] { };
        var bytes = new byte[2000];
        var i = 0;

        while ((i = zipInputStream.Read(bytes, 0, bytes.Length)) != 0)
        {
            var arrayLength = bytesResult.Length;
            Array.Resize<byte>(ref bytesResult, arrayLength + i);
            Array.Copy(bytes, 0, bytesResult, arrayLength, i);
        }

        contentXml = Encoding.UTF8.GetString(bytesResult);
    }

    return contentXml;
}
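If you would rather not take the SharpZipLib dependency, .NET 4.5 and newer ship System.IO.Compression, which can do the same job. This is only an alternative sketch, not part of the original importer.

using System;
using System.IO;
using System.IO.Compression;
using System.Text;

// Alternative to the SharpZipLib version above, using the built-in ZipArchive
// class (requires a reference to the System.IO.Compression assembly).
private static string GetContentXml(Stream fileStream)
{
    using (var archive = new ZipArchive(fileStream, ZipArchiveMode.Read, true))
    {
        ZipArchiveEntry entry = archive.GetEntry("content.xml");
        if (entry == null)
        {
            throw new Exception("Cannot find content.xml");
        }

        using (var reader = new StreamReader(entry.Open(), Encoding.UTF8))
        {
            return reader.ReadToEnd();
        }
    }
}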
Importing rows
Our last task is to import the rows. We use a separate method for this because there are a couple of tricks to handle. To keep files smaller, the cell count per row is not always the same: when several empty cells follow one another, ODS stores only a single cell for the whole run. This cell has an attribute called number-columns-repeated whose value is the number of sequential empty cells it stands for. This is why we use two indexes over the cells collection: one for the physical cells and one for the logical column.
private void ImportRow(XElement row, long companyId, short year, ImportResult result)
{
    var cells = (from c in row.Descendants()
                 where c.Name == "{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-cell"
                 select c).ToList();

    var dto = new DataDto();
    var count = cells.Count;

    // i walks the physical cells, j tracks the logical column index.
    var j = -1;

    for (var i = 0; i < count; i++)
    {
        j++;
        var cell = cells[i];

        // A repeated cell stands for several sequential (usually empty) cells.
        var attr = cell.Attribute("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}number-columns-repeated");
        if (attr != null)
        {
            var numToSkip = 0;
            if (int.TryParse(attr.Value, out numToSkip))
            {
                j += numToSkip - 1;
            }
        }

        if (i > 30)
            break;

        if (j == 0)
        {
            dto.SomeProperty = cells[i].Value;
        }
        if (j == 1)
        {
            dto.SomeOtherProperty = cells[i].Value;
        }

        // some more data reading
    }

    // save data
}
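If the dual-index bookkeeping feels error-prone, another option is to expand the repeated cells into a plain list first and then read values by column position. This is only an alternative sketch, not part of the original importer; GetLogicalCells is a made-up helper name.

using System.Collections.Generic;
using System.Xml.Linq;

// Expands table-cell elements into one entry per logical column by honouring
// the number-columns-repeated attribute, so callers can index columns directly.
// Note: trailing empty cells can carry very large repeat counts, so a caller
// may still want to cap how many columns it reads.
private static List<XElement> GetLogicalCells(XElement row)
{
    XNamespace table = "urn:oasis:names:tc:opendocument:xmlns:table:1.0";
    var result = new List<XElement>();

    foreach (var cell in row.Elements(table + "table-cell"))
    {
        var repeatAttr = cell.Attribute(table + "number-columns-repeated");

        int repeat;
        if (repeatAttr == null || !int.TryParse(repeatAttr.Value, out repeat))
        {
            repeat = 1;
        }

        for (var i = 0; i < repeat; i++)
        {
            result.Add(cell);
        }
    }

    return result;
}

With such a helper, ImportRow could simply read cells[0].Value, cells[1].Value and so on without tracking two counters.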
Conclusion
Reading ODS files may seem like a complex task, but it is actually easy if all we need is the data in those documents. We can use a zip library to get the content file and then parse it as XML. Walking the XML is not hard, but there are a few optimization tricks we have to know about. The code here is safe to use in web applications, as it does not rely on any APIs with special server or infrastructure requirements.