
Database Issues


The most common issues with databases revolve around scale and knowledge.


To learn about the current and future state of databases, we spoke with and received insights from 19 IT professionals. We asked, "What are the most common issues you see companies having with databases?" Here’s what they shared with us:

Scalability

  • Scalability is a common issue. We need to design the database and IT stack to cope with more data, since data volumes are only going up. Accessibility and usability of the data also matter: data is not very valuable unless you are gaining insight from it, so make sure you are deriving useful information from it.
  • The number one issue is scale, performance, and cost efficiency. The second is vendor lock-in. The third is architectural: when customers choose more cost-effective solutions and build databases on top of shaky foundations, they eventually have to go back and look at a common provider.
  • 1) A large proportion of our customer base used to struggle to meet their SLAs because their platforms couldn’t deliver high transactional throughput and low latency at scale. For them, this resulted in business losses, compromised reputations, and general dissatisfaction all around. They needed very high performance and low latency without server sprawl and exponential cost, which is why they came to us. 2) As the database market evolves, companies are finding it difficult to evaluate and choose a solution. There are relational databases, columnar databases, object-oriented databases, and NoSQL databases, and sometimes businesses are not able to differentiate between them. That’s something that we (as a software industry) need to improve on. 3) Legacy systems still cause customers problems. This is due to skills shortages, the risk of integrating and updating old platforms with newer technology, and sometimes frayed relationships between business and IT based on historical politics. 4) IoT, sensors, social media, and the like have contributed to a data explosion, and companies are struggling to cope. Research shows that we’ve created more data in the past two years than in all of prior human history. 5) There are considerable benefits to decentralized data management, but it presents challenges as well. How will the data be distributed? What’s the best decentralization method? What’s the proper degree of decentralization? A significant challenge in designing and managing a distributed database results from the inherent lack of centralized knowledge of the entire database.
  • 1) Latency spikes with mixed workloads at scale. Cassandra handles ingest well, but if you also run reads and analytics against it, latency spikes. Ingestion is becoming real-time; it’s not batch anymore. Almost all solutions have latency spikes, so look at benchmarks. 2) Scale: systems have limits, or become impossible to manage at scale because they are so complicated. The need for manual scaling, and scale limits at petabytes and billions of requests, is a big problem. 3) Data loss: can I trust the new transactional data solutions to take care of my data? 4) Silos: running multiple non-integrated applications without spinning up separate database instances for each. 5) Security: for mission-critical systems of record, anything less than security all of the time is a non-starter.
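The advice above to "look at benchmarks" for latency spikes can be made concrete: averages hide spikes, so compare the median against the tail percentiles. A minimal sketch in plain Python (the timing values are hypothetical; in practice they would come from a load-test run under a mixed read/write workload):

```python
# Sketch: summarizing latency percentiles from recorded request timings.
# The sample timings below are hypothetical illustrations.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    # Nearest-rank: the sample at or above the requested percentile.
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

def latency_report(samples):
    """Median, p99, and max: spikes show up in the gap between them."""
    return {
        "p50": percentile(samples, 50),
        "p99": percentile(samples, 99),
        "max": max(samples),
    }

# A mostly steady workload with a few ingest-induced spikes mixed in:
timings_ms = [2, 3, 2, 4, 3, 2, 3, 2, 250, 3, 2, 4, 3, 310, 2, 3]
report = latency_report(timings_ms)
# The median stays low while p99 exposes the spikes.
```

A system can look healthy on p50 while its p99 tells the real story, which is why the quote singles out spikes rather than average latency.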


Knowledge/Skill

  • Data management can be difficult if you do not have knowledgeable staff. There is regular database patching/updating to do, and disaster recovery tests need to be carried out. Without adequate, established processes and experienced personnel, downtime for database access may stretch out to hours, if not days, which is obviously a very costly exercise for companies.
  • A recent developer study from Stripe provides a lot of insight that mirrors what we have found both anecdotally with our users and in our own research: The largest constraint to business growth is access to developers, and 98 percent of respondents rate improving developer productivity as a top priority. Working with legacy databases based on 40-year-old technology is an inhibitor to developer productivity as they build modern, data-driven applications. The same survey identified that more than 40 percent of developers’ workweek is spent fixing tech debt and grappling with antiquated, legacy systems. Developers need databases that support modern app development approaches: using microservices, Agile/DevOps, deployed to on-demand, elastically scalable cloud platforms, etc.
  • Educating people on the new opportunities and thinking beyond what currently seems possible. As new problems arise, you get duct-tape infrastructure stitched together into a platform that is bulky and inefficient. Look at simplifying the infrastructure, and build solutions that are mission-critical to the business.

Other

  • Siloing. DBAs and security people are the only ones who can veto how things get rolled out. Culture has to change so that, across organizations, databases become more integrated and more agile; for example, you may need to upgrade from Oracle 9 to Oracle 19. If you have a lot running on an old version, that’s a heavy lift.
  • The biggest challenge is managing the people and the process — waking up and having people change. As people bring the database into unified software development, it requires change management and process evolution. Increasing the quality and velocity of database releases requires processes and people management.
  • Legacy apps running on-premise, and modernization of the existing infrastructure. Companies don’t know where to start: how to move to the cloud, how to modernize applications securely while reducing operational overhead, and how to automate to scale more effectively.
  • 1) Moving data to a public cloud. 2) Then, being able to handle a lot of data coming in or being stored.
  • Two particularly challenging technical issues are the manageability of databases and dealing with the operations side. Many databases are fairly easy to get started with and spin up a cluster, but move into production scale and you can begin to get lost. There is also less readily-available knowledge the deeper you go, with experts relatively hard to come by. With a database like Apache Cassandra, for example, there can be some rough edges around its management. If you don’t understand 100 percent of what’s going on, you can shoot yourself in the foot quite easily.
  • A lot of companies still don’t make use of their data in real time (but would benefit from doing so). They use technology that only enables them to generate reports overnight, over weeks, or even over months. They are also adopting interfaces and technologies that deeply lock them into a specific database vendor or cloud provider.
  • 1) Most companies provision a database for peak capacity, which means they are paying for more than what they actually use. This is nothing new to database administrators, because a database that is not provisioned for peak capacity is a disaster waiting to happen for your business. That was fine when most people used databases for smaller data sets; when the data set is large, this wasted capacity is a very significant cost. 2) Companies spend a lot of time and resources "managing and operating" a database. This includes the time to deploy new versions of the database software without incurring database downtime. Again, this was an acceptable hazard in earlier times, because these events were tightly scheduled and occurred very infrequently. But in the current day and age, agility and flexibility in software deployments are the key to unlocking new opportunities. Many companies encounter impediments to moving fast in their software deployments because they cannot change their database schemas often, and they cannot change them online.
  • The sheer complexity of the types and formats of data coming in. Structures are radically different, such as streaming IoT data with constantly changing structures. How do you ingest with Kafka, S3, and all the other technologies? The value-add is metadata as part of the infrastructure. Reduce the overhead on the IT infrastructure to manage the base technologies.
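The "provisioned for peak capacity" point above can be put in rough numbers with a back-of-the-envelope calculation. A sketch (all figures are hypothetical, not vendor pricing):

```python
# Sketch: monthly spend on database capacity that sits idle outside peaks.
# All numbers are hypothetical illustrations.

def wasted_capacity_cost(peak_units, avg_units, cost_per_unit_month):
    """Monthly cost of the headroom between provisioned peak and average use."""
    idle_units = peak_units - avg_units
    return idle_units * cost_per_unit_month

# E.g. provisioned for a peak of 100 capacity units but averaging 30,
# at a hypothetical $50 per unit-month:
wasted = wasted_capacity_cost(peak_units=100, avg_units=30,
                              cost_per_unit_month=50)
# 70 idle units * $50 = $3,500/month paid for unused headroom.
```

For a small data set this overhead is tolerable; scaled to petabyte-class deployments, the same ratio of idle headroom becomes the "very significant cost" the quote describes, which is what drives interest in elastically scaled databases.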
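Coping with "constantly changing structures" in streaming data often comes down to normalizing heterogeneous records before they reach the database, keeping unknown fields as metadata instead of dropping them. A minimal sketch (field names like `device_id` and `humidity` are invented for illustration; a real pipeline would sit behind a Kafka consumer or an S3 ingest job):

```python
import json

# Sketch: normalizing IoT-style records whose structure drifts over time.
# The schema and field names here are hypothetical examples.

EXPECTED_FIELDS = {"device_id": None, "temperature": None, "ts": None}

def normalize(raw_json):
    """Coerce a raw record into a fixed schema, preserving extras as metadata."""
    record = json.loads(raw_json)
    row = {field: record.get(field, default)
           for field, default in EXPECTED_FIELDS.items()}
    # Fields the schema does not know about are kept as metadata rather
    # than dropped, so new fields can be promoted to columns later.
    row["extra"] = {k: v for k, v in record.items()
                    if k not in EXPECTED_FIELDS}
    return row

# Two records from different firmware versions of the same sensor:
old = normalize('{"device_id": "a1", "temperature": 21.5, "ts": 1}')
new = normalize('{"device_id": "a1", "temperature": 21.5, "ts": 2, "humidity": 40}')
```

Both records land in the same fixed shape, so the database ingest path stays stable even as upstream producers evolve, which is one way of treating metadata as part of the infrastructure.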



Opinions expressed by DZone contributors are their own.
