
Compiling Kotlin in Runtime

JSR-223: compile Kotlin code dynamically, after application start.

By Igor Manushin · Sep. 02, 20 · Tutorial

Everybody knows of tasks that would be easy to solve if you could generate and execute code on the fly inside the JVM runtime. Sometimes, however, you have to ship your code as a separate library, so the types it must handle aren't known at its compile time.

In this article, we walk through an approach to generating code and executing it after application start, using the JSR-223 standard.

As an example we take a popular task: an AOP-style system that parses a SQL query response, turning the result table into objects. A developer adds annotations to the code, and our application generates and executes the parser code at runtime. The approach is much broader than that, though. You can build programmable configuration (like the TeamCity build DSL), or you can optimize existing code (the generated code can be constructed after the settings are read, so it is branch-free). And of course, you can use it to avoid copy-paste when the language's expressive power isn't enough to extract a generalized block.

All code is available on GitHub; note that you need Java 11 (or later) to build it. The article shows a simplified version, without logs, diagnostics, tests, etc.

Task

First of all: if you need to solve exactly this task, please check the existing libraries first. In most cases there is already a developed and supported solution; I can recommend Hibernate or Spring Data, which can do the same.

What we'd like to have: the ability to mark a data class with annotations so that some "SQL query result to DTO converter" can turn rows into instances of our class.

For instance, our client code could look like this:

Kotlin

data class DbUser(
    @SqlMapping(columnName = "name")
    val name: UserName,
    @SqlMapping(columnName1 = "user_email_name", columnName2 = "user_email_domain")
    val email: Email
)



As you know, to read a database response with Spring JDBC, it is better to use the ResultSet interface.

There are at least two methods to extract a String from a column:

Java

String getString(int columnIndex) throws SQLException;

String getString(String columnLabel) throws SQLException;



Let's complicate our task a little:

  • In the case of a huge query result, it is better to use an index-based approach (i.e., we retrieve the indexes of all columns before the first row arrives, remember them, and then use these indexes for each row). This matters for high-performance applications, because the string-based methods have to compare column names first, which costs at least N×M unnecessary string equality checks, where N is the row count and M is the column count.
  • For another performance boost, we shouldn't use reflection. Therefore, we have to avoid BeanPropertyRowMapper and the like, because they are reflection-based and too slow.
  • Property types may be not only primitives, like String or Int. They can also be complex, like a hand-written NonEmptyText (a class with a single String field that can't be null or empty).
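To make the index-based bullet concrete, here is a small self-contained sketch; the fake ResultSet built with a dynamic proxy and the readNames helper are purely illustrative, not part of the article's library:

```kotlin
import java.lang.reflect.Proxy
import java.sql.ResultSet

// Resolve the column index once, then use it for every row.
fun readNames(rs: ResultSet): List<String> {
    val nameIndex = rs.findColumn("name")    // one name lookup in total ...
    val result = mutableListOf<String>()
    while (rs.next()) {
        result.add(rs.getString(nameIndex))  // ... and index-based access per row
    }
    return result
}

// Minimal single-column fake ResultSet via a dynamic proxy, just to run the sketch.
fun fakeResultSet(rows: List<String>): ResultSet {
    var cursor = -1
    return Proxy.newProxyInstance(
        ResultSet::class.java.classLoader,
        arrayOf(ResultSet::class.java)
    ) { _, method, _ ->
        when (method.name) {
            "findColumn" -> 1
            "next" -> { ++cursor < rows.size }
            "getString" -> rows[cursor]
            else -> throw UnsupportedOperationException(method.name)
        }
    } as ResultSet
}

fun main() {
    println(readNames(fakeResultSet(listOf("alice", "bob")))) // [alice, bob]
}
```

A real driver resolves labels against query metadata for every name-based call, which is exactly the per-row cost the index-based variant avoids.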

As observed above, it is better to extract our solution into a separate library. Therefore, we don't know all the involved types when the library itself is compiled. We'd like the database response parsing to look something like this:

Kotlin

fun extractData(rs: ResultSet): List<DbUser> {
    val queryMetadata = rs.metaData
    val queryColumnCount = queryMetadata.columnCount
    val mapperColumnCount = 3

    require(queryColumnCount == mapperColumnCount)

    val columnIndex0 = rs.findColumn("name")
    val columnIndex1 = rs.findColumn("user_email_name")
    val columnIndex2 = rs.findColumn("user_email_domain")
    val result = mutableListOf<DbUser>()
    while (rs.next()) {
        result.add(
            DbUser(
                name = UserName(rs.getString(columnIndex0)),
                email = Email(EmailUser(rs.getString(columnIndex1)), EmailDomain(rs.getString(columnIndex2)))
            )
        )
    }
    return result
}



One more reminder: please don't use this approach in your project without first surveying what already exists. Moreover, if your task is to parse database rows, then even if you want to work with JDBC directly (without Hibernate or similar), you can achieve this without code generation. That is your homework: find a way to do it.

Kotlin Script Evaluation

For now, there are two relatively easy ways to compile Kotlin at runtime: you can use the Kotlin compiler directly, or you can use the JSR-223 wrapper. The first approach lets you compile multiple files together and is more extensible; however, it is more complex to use. The second simply adds a new type into the current class loader. Of course, this isn't safe, so please execute only trusted code (also, while the Kotlin script compiler runs code in a separate restricted class loader, the default security configuration doesn't prevent new process creation or file system access, so please be careful there too).

First of all, let's define our interface. We don't want to generate new code for each SQL query, so let's do it once per object type that will be read from the database. For instance, the interface could be:

Kotlin

interface ResultSetMapper<TMappingType> : ResultSetExtractor<List<TMappingType>>

interface DynamicResultSetMapperFactory {
    fun <TMappingType : Any> createForType(clazz: KClass<TMappingType>): ResultSetMapper<TMappingType>
}

inline fun <reified TMappingType : Any> DynamicResultSetMapperFactory.createForType(): ResultSetMapper<TMappingType> {
    return createForType(TMappingType::class)
}



The inline method is required to create the illusion of real generics on the JVM. It allows ResultSetMapper construction with code like: return mapperFactory.createForType<MyClass>().
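As a standalone illustration of the reified trick, here is a sketch where the hypothetical MapperFactory interface stands in for the factory above and returns a String instead of a real mapper:

```kotlin
import kotlin.reflect.KClass

// Hypothetical stand-in for the factory above, returning a String
// instead of a real ResultSetMapper to keep the sketch self-contained.
interface MapperFactory {
    fun <T : Any> createForType(clazz: KClass<T>): String
}

// inline + reified lets the call site pass only the type argument:
// the compiler inlines the body and substitutes the concrete type for T.
inline fun <reified T : Any> MapperFactory.createForType(): String =
    createForType(T::class)

fun main() {
    val factory = object : MapperFactory {
        override fun <T : Any> createForType(clazz: KClass<T>) =
            "mapper for ${clazz.simpleName}"
    }
    // No ::class at the call site; the type argument alone is enough.
    println(factory.createForType<String>()) // mapper for String
}
```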

ResultSetMapper extends the standard Spring interface:

Java

@FunctionalInterface
public interface ResultSetExtractor<T> {
    @Nullable
    T extractData(ResultSet rs) throws SQLException, DataAccessException;
}



The factory implementation is responsible for generating the code from the class annotations and then executing it. So we have a mockup like:

Kotlin

override fun <TMappingType : Any> createForType(clazz: KClass<TMappingType>): ResultSetMapper<TMappingType> {
    val sourceCode = getMapperSourceCode(clazz) // generates code

    return compiler.compile(sourceCode) // compiles code
}



We have to return ResultSetMapper<TMappingType>, and it is better to create a class without generic parameters so the JVM has full type knowledge (in this case the GraalVM and C2 compilers can apply more optimization techniques). Therefore, we compile code like:

Kotlin

object : ResultSetMapper<DbUser> { // singleton, which implements the interface
   override fun extractData(rs: java.sql.ResultSet): List<DbUser> {
      /* generated code */
   }
}



For code compilation, we need three steps:

  1. Add all the necessary dependencies to the classpath.
  2. Tell Java about the available script engines (the Kotlin script compiler in our case).
  3. Using ScriptEngineManager, execute code that returns the object above.

For the first item, let's add the following lines to the Gradle script:

Kotlin

implementation(kotlin("reflect"))
implementation(kotlin("script-runtime"))
implementation(kotlin("compiler-embeddable"))
implementation(kotlin("script-util"))
implementation(kotlin("scripting-compiler-embeddable"))



For the second item, let's add the file "src/main/resources/META-INF/services/javax.script.ScriptEngineFactory" to the jar, with the following line:

Plain Text

org.jetbrains.kotlin.script.jsr223.KotlinJsr223JvmLocalScriptEngineFactory



And then the last remaining item: execute the script at runtime:

Kotlin

fun <TResult> compile(sourceCode: String): TResult {
    val scriptEngineManager = ScriptEngineManager()

    val factory = scriptEngineManager.getEngineByExtension("kts").factory // the JVM knows that the kts extension is handled by KotlinJsr223JvmLocalScriptEngineFactory

    val engine = factory.scriptEngine as KotlinJsr223JvmLocalScriptEngine

    @Suppress("UNCHECKED_CAST")
    return engine.eval(sourceCode) as TResult
}


Preparing the Model

As I wrote above, let's complicate our task: we will generate code not only for built-in JVM types but also for hand-written ones. So let's dig deeper into our data model.

Let's imagine that we are trying to write strongly typed code that rejects invalid data as early as possible. Therefore:

  1. Instead of the field userName: String we have userName: UserName, where the class UserName has just one field.
  2. UserName can't be empty, so we should check this value in the constructor.
  3. We plan to have a lot of such classes, so this logic should be extracted into a common block.

As one approach, we can implement this the following way.

Create a class NonEmptyText, which has the necessary field and all required checks in the constructor:

Kotlin

abstract class NonEmptyText(val value: String) {
    init {
        require(value.isNotBlank()) {
            "Empty text is prohibited for ${this.javaClass.simpleName}. Actual value: $this"
        }
    }

    override fun equals(other: Any?): Boolean {
        if (this === other) return true
        if (javaClass != other?.javaClass) return false

        other as NonEmptyText

        if (value != other.value) return false

        return true
    }

    override fun hashCode(): Int {
        return value.hashCode()
    }

    override fun toString(): String {
        return value
    }
}



Next, let's add one more type-construction abstraction:

Kotlin

interface NonEmptyTextConstructor<out TResult : NonEmptyText> {
    fun create(value: String): TResult
}



Now we can create the UserName class:

Kotlin

class UserName(value: String) : NonEmptyText(value) {
    companion object : NonEmptyTextConstructor<UserName> {
        override fun create(value: String) = UserName(value)
    }
}



Here we have UserName, which is strongly typed. Its companion object can construct instances, so now we can create them without a direct constructor call:

Kotlin

UserName.create("123")



Now we can give this interface to anyone who wants to create an instance from an input string. For instance, the call to the method fun <TValue> createText(input: String?, constructor: NonEmptyTextConstructor<TValue>): TValue? is createText("123", UserName), which is intuitive. It looks like type classes on the JVM.
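The createText helper itself isn't shown in the article, so here is a minimal sketch of how it could look. The null-propagation behavior and the NonEmptyText bound on TValue are my assumptions, and the supporting types are repeated in condensed form so the sketch compiles on its own:

```kotlin
// Condensed copies of the article's types, so the sketch is self-contained.
abstract class NonEmptyText(val value: String) {
    init { require(value.isNotBlank()) { "Empty text is prohibited" } }
    override fun toString() = value
}

interface NonEmptyTextConstructor<out TResult : NonEmptyText> {
    fun create(value: String): TResult
}

class UserName(value: String) : NonEmptyText(value) {
    companion object : NonEmptyTextConstructor<UserName> {
        override fun create(value: String) = UserName(value)
    }
}

// Sketch of the createText signature quoted above: null input yields null
// (an assumption); non-null input goes through the type's constructor checks.
fun <TValue : NonEmptyText> createText(
    input: String?,
    constructor: NonEmptyTextConstructor<TValue>
): TValue? = input?.let { constructor.create(it) }

fun main() {
    println(createText("123", UserName)) // 123
    println(createText(null, UserName))  // null
}
```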

Let's define Email the following way:

Kotlin

class EmailUser(value: String) : NonEmptyText(value) {
    companion object : NonEmptyTextConstructor<EmailUser> {
        override fun create(value: String) = EmailUser(value)
    }
}

class EmailDomain(value: String) : NonEmptyText(value) {
    companion object : NonEmptyTextConstructor<EmailDomain> {
        override fun create(value: String) = EmailDomain(value)
    }
}

data class Email(val user: EmailUser, val domain: EmailDomain)



We split the email into two types here just to have an example of a complex type; in real life we wouldn't need this. In our case, it lets us test the "read a single object from two columns" approach. Not every ORM implementation can do this, but we can.

Next, let's create the DbUser type. It is our DTO, which we read from the database:

Kotlin

data class DbUser(val name: UserName, val email: Email)



To generate the database-result parsing code, we must:

  1. Define the column matching: for the name field, we have to define one column name.
  2. For the email field, we need to define two column names.
  3. Define the database reading method (moreover, even the String type can be read in different ways).

If we have a "one column - one type" matching, then the database reading method can be defined with a simple interface:

Kotlin

interface SingleValueMapper<out TValue> {
    fun getValue(resultSet: ResultSet, columnIndex: Int): TValue
}



So during ResultSet reading, we can do the following:

  1. Remember once which index corresponds to which column.
  2. For each row:
    1. Call getValue for each cell.
    2. Create the object from the results of the previous step.

As we observed before, let's assume the project has a lot of types that can be modeled as a "non-empty string". Therefore, we can create a common mapper for them:

Kotlin

abstract class NonEmptyTextValueMapper<out TResult : NonEmptyText>(
        private val textConstructor: NonEmptyTextConstructor<TResult>
) : SingleValueMapper<TResult> {
    override fun getValue(resultSet: ResultSet, columnIndex: Int): TResult {
        return textConstructor.create(resultSet.getString(columnIndex))
    }
}



As you can see, we put the object constructor into this class. Now we can easily create mappers for the concrete classes:

Kotlin

object UserNameMapper : NonEmptyTextValueMapper<UserName>(UserName) // this object can convert a column value to a UserName



Unfortunately, I didn't find a way to express a mapper with extension methods, i.e., to have some kind of extension type. In Scala you can achieve this via implicits; however, that approach isn't explicit.

As we noticed, we have a complex type, Email, and it requires two columns. Therefore, the interface above isn't applicable to it. As an option, we can create a separate one:

Kotlin

interface DoubleValuesMapper<out TValue> {
    fun getValue(resultSet: ResultSet, columnIndex1: Int, columnIndex2: Int): TValue
}



Here we have two input columns and a single result object. This is exactly what we needed; however, we would have to copy-paste such interfaces for each column-count option.

Now we can build a combined mapper, which looks like this:

Kotlin

abstract class TwoMappersValueMapper<out TResult, TParameter1, TParameter2>(
        private val parameterMapper1: SingleValueMapper<TParameter1>,
        private val parameterMapper2: SingleValueMapper<TParameter2>
) : DoubleValuesMapper<TResult> {
    override fun getValue(resultSet: ResultSet, columnIndex1: Int, columnIndex2: Int): TResult {
        return create(
                parameterMapper1.getValue(resultSet, columnIndex1),
                parameterMapper2.getValue(resultSet, columnIndex2)
        )
    }

    abstract fun create(parameter1: TParameter1, parameter2: TParameter2): TResult
}



And now Email can be read the following way:

Kotlin

object EmailUserMapper : NonEmptyTextValueMapper<EmailUser>(EmailUser)
object EmailDomainMapper : NonEmptyTextValueMapper<EmailDomain>(EmailDomain)

object EmailMapper : TwoMappersValueMapper<Email, EmailUser, EmailDomain>(EmailUserMapper, EmailDomainMapper) {
    override fun create(parameter1: EmailUser, parameter2: EmailDomain): Email {
        return Email(parameter1, parameter2)
    }
}
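As a quick sanity check of the mapper composition, the chain can be exercised against a stubbed ResultSet. The dynamic-proxy stub and the stubRow helper are illustrative only, and the article's types are repeated in condensed form so the sketch is self-contained:

```kotlin
import java.lang.reflect.Proxy
import java.sql.ResultSet

// Condensed copies of the article's model and mapper types.
abstract class NonEmptyText(val value: String) {
    init { require(value.isNotBlank()) }
    override fun toString() = value
}
interface NonEmptyTextConstructor<out T : NonEmptyText> { fun create(value: String): T }
class EmailUser(value: String) : NonEmptyText(value) {
    companion object : NonEmptyTextConstructor<EmailUser> {
        override fun create(value: String) = EmailUser(value)
    }
}
class EmailDomain(value: String) : NonEmptyText(value) {
    companion object : NonEmptyTextConstructor<EmailDomain> {
        override fun create(value: String) = EmailDomain(value)
    }
}
data class Email(val user: EmailUser, val domain: EmailDomain)

interface SingleValueMapper<out T> { fun getValue(resultSet: ResultSet, columnIndex: Int): T }
interface DoubleValuesMapper<out T> { fun getValue(resultSet: ResultSet, columnIndex1: Int, columnIndex2: Int): T }

abstract class NonEmptyTextValueMapper<out T : NonEmptyText>(
    private val textConstructor: NonEmptyTextConstructor<T>
) : SingleValueMapper<T> {
    override fun getValue(resultSet: ResultSet, columnIndex: Int) =
        textConstructor.create(resultSet.getString(columnIndex))
}
abstract class TwoMappersValueMapper<out TResult, TParameter1, TParameter2>(
    private val parameterMapper1: SingleValueMapper<TParameter1>,
    private val parameterMapper2: SingleValueMapper<TParameter2>
) : DoubleValuesMapper<TResult> {
    override fun getValue(resultSet: ResultSet, columnIndex1: Int, columnIndex2: Int): TResult =
        create(parameterMapper1.getValue(resultSet, columnIndex1), parameterMapper2.getValue(resultSet, columnIndex2))
    abstract fun create(parameter1: TParameter1, parameter2: TParameter2): TResult
}

object EmailUserMapper : NonEmptyTextValueMapper<EmailUser>(EmailUser)
object EmailDomainMapper : NonEmptyTextValueMapper<EmailDomain>(EmailDomain)
object EmailMapper : TwoMappersValueMapper<Email, EmailUser, EmailDomain>(EmailUserMapper, EmailDomainMapper) {
    override fun create(parameter1: EmailUser, parameter2: EmailDomain) = Email(parameter1, parameter2)
}

// A stub ResultSet that serves fixed strings per column index.
fun stubRow(columns: Map<Int, String>): ResultSet =
    Proxy.newProxyInstance(
        ResultSet::class.java.classLoader,
        arrayOf(ResultSet::class.java)
    ) { _, method, args ->
        when (method.name) {
            "getString" -> columns.getValue(args!![0] as Int)
            else -> throw UnsupportedOperationException(method.name)
        }
    } as ResultSet

fun main() {
    val rs = stubRow(mapOf(1 to "igor", 2 to "example.com"))
    println(EmailMapper.getValue(rs, 1, 2)) // Email(user=igor, domain=example.com)
}
```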



Now we have the last remaining items: define our annotations and write the code generation.

Kotlin

@Target(AnnotationTarget.VALUE_PARAMETER)
@MustBeDocumented
annotation class SingleMappingValueAnnotation(
        val constructionClass: KClass<out SingleValueMapper<*>>, // the mapper requires just a single field ...
        val columnName: String                                   // ... therefore we have one column
)

@Target(AnnotationTarget.VALUE_PARAMETER)
@MustBeDocumented
annotation class DoubleMappingValuesAnnotation(
        val constructionClass: KClass<out DoubleValuesMapper<*>>, // the mapper requires two fields ...
        val columnName1: String,                                  // ... therefore we have two columns
        val columnName2: String
)


Code Generation From the Annotations

First of all, let's define what code we would like to see. I used the following, which complies with all the initial criteria:

Kotlin

object : ResultSetMapper<DbUser> {
   override fun extractData(rs: java.sql.ResultSet): List<DbUser> {
      val queryMetadata = rs.metaData
      val queryColumnCount = queryMetadata.columnCount
      val mapperColumnCount = 3

      require(queryColumnCount == mapperColumnCount) {
          val queryColumns = (1..queryColumnCount).joinToString { queryMetadata.getColumnName(it) }
          "Sql query has invalid columns: $mapperColumnCount is expected, however $queryColumnCount is returned. " +
              "Query has: $queryColumns. Mapper has: name, user_email_name, user_email_domain"
      }

      val columnIndex0 = rs.findColumn("name")
      val columnIndex1 = rs.findColumn("user_email_name")
      val columnIndex2 = rs.findColumn("user_email_domain")
      val result = mutableListOf<DbUser>()
      while (rs.next()) {
          val name = UserNameMapper.getValue(rs, columnIndex0)
          val email = EmailMapper.getValue(rs, columnIndex1, columnIndex2)

          val rowResult = DbUser(
              name = name,
              email = email
          )
          result.add(rowResult)
      }
      return result
   }
}



This code is generated as a monolith (variables are defined first and only then used). It contains several blocks with distinct ideas:

  1. We have N input columns, which are used by different mappers. Therefore, we need different variables for them (the same column can be used by different mappers).
  2. First of all, we should verify what we received from the database. If the column count differs from the expected one, it is better to raise an exception with plenty of details: what we received, what we expected, etc.
  3. SQL cursors work via a loop like while (rs.next()) { ... }, so let's create a mutable list. Ideally, we could set its size up front if we knew how many rows the database returns.
  4. On each loop iteration, we read all the field values and then create the resulting object.

Finally, we have the following code:

Kotlin

private fun <TMappingType : Any> getMapperSourceCode(clazz: KClass<TMappingType>): String {
    return buildString {
        val className = clazz.qualifiedName!!
        val resultSetClassName = ResultSet::class.java.name

        val singleConstructor = clazz.constructors.single()
        val parameters = singleConstructor.parameters

        val annotations = parameters.flatMap { it.annotations.toList() }

        val columnNames = annotations.flatMap { getColumnNames(it) }.toSet()
        val columnNameToVariable = columnNames.mapIndexed { index, name -> name to "columnIndex$index" }.toMap()

        appendln("""
import com.github.imanushin.ResultSetMapper
object : com.github.imanushin.ResultSetMapper<$className> {
   override fun extractData(rs: $resultSetClassName): List<$className> {
      val queryMetadata = rs.metaData
      val queryColumnCount = queryMetadata.columnCount
      val mapperColumnCount = ${columnNameToVariable.size}
      require(queryColumnCount == mapperColumnCount) {
          val queryColumns = (1..queryColumnCount).joinToString { queryMetadata.getColumnName(it) }
          "Sql query has invalid columns: \${'$'}mapperColumnCount is expected, however \${'$'}queryColumnCount is returned. " +
              "Query has: \${'$'}queryColumns. Mapper has: ${columnNames.joinToString()}"
      }
""")

        columnNameToVariable.forEach { (columnName, variableName) ->
            appendln("      val $variableName = rs.findColumn(\"$columnName\")")
        }

        appendln("""
       val result = mutableListOf<$className>()
       while (rs.next()) {
""")

        parameters.forEach { parameter ->
            fillParameterConstructor(parameter, columnNameToVariable)
        }

        appendln("          val rowResult = $className(")
        appendln(
                parameters.joinToString("," + System.lineSeparator()) { parameter ->
                    "              ${parameter.name} = ${parameter.name}"
                }
        )

        appendln("""
          )
          result.add(rowResult)
      }
      return result
   }
}
""")
    }
}

private fun StringBuilder.fillParameterConstructor(parameter: KParameter, columnNameToVariable: Map<String, String>) {
    append("              val ${parameter.name} = ")
    // please note: duplicated or missing annotations aren't covered here
    parameter.annotations.forEach { annotation ->
        when (annotation) {
            is DoubleMappingValuesAnnotation ->
                appendln("${annotation.constructionClass.qualifiedName}.getValue(" +
                        "rs, " +
                        "${columnNameToVariable[annotation.columnName1]}, " +
                        "${columnNameToVariable[annotation.columnName2]})"
                )
            is SingleMappingValueAnnotation ->
                appendln("${annotation.constructionClass.qualifiedName}.getValue(" +
                        "rs, " +
                        "${columnNameToVariable[annotation.columnName]})"
                )
        }
    }
}


Why Do We Need This?

As you can see, it is easy to generate executable code at runtime. I spent just a few hours on this small example library (and a few more on the article). Still, we end up with working code that reads rows from the database faster than most of the intuitive approaches on Stack Overflow. Moreover, because the code generation is fully controlled, we can also add object interning, performance measurement, and a lot of other improvements and optimizations. And the most important point: we know the exact code that will be executed.

Kotlin DSLs can also be used for programmable configuration. If you love your users, you can stop forcing them to use JSON/XML/YAML files and give them a DSL instead, which defines the configuration in a type-safe way. As an example, take a look at the TeamCity build DSL: you can develop your build, and you can write a condition or a loop instead of copying a step 10 times. You get full code highlighting in the IDE. In the end, the application needs a configuration model either way; there is no real restriction on how it is created.

Not all ideas can be expressed in your programming language, and often you don't want to copy-paste code that isn't simple to verify. Code generation can help here too. If you can describe your implementation with annotations, why not do it in a common way and hide it behind an interface? This approach is also friendly to the JIT compiler, which then sees code with all types explicit instead of generic ones (where some optimizations, such as stack allocation, are impossible).

However, the most important point: please estimate first whether it is really necessary to play with code generation and runtime code execution. In some projects, a reflection-based approach performs well enough, which means it is better to avoid non-standard techniques and not overcomplicate the project.


Published at DZone with permission of Igor Manushin. See the original article here.

