This is the last in a series of four posts about how to navigate the realm of data, big and small, to connect low-income people to the jobs and amenities they need. Read the prior blog in the series.
We need a place to aggregate research and data.
Distinct from the challenge of accessing data, one of the hardest parts of crafting a research scope is identifying and accessing previous studies and the datasets they used. As we talked to industry experts during our scoping process, many would remember a landscape analysis or property study from years ago, but couldn’t recall its conclusions or where to find it again.
We found this to be particularly true in Los Angeles. There, we’ve partnered with LA THRIVES—the local collaborative table working on equitable transit-oriented development (eTOD)—and Thomas Yee, local LIIF staff and the group’s Initiative Officer. Thomas has worked in L.A. community development and real estate project management for 14 years. When we started talking to him about our study, he had a gut sense that previous research efforts had covered similar ground. He knew useful information had been gathered and conclusions drawn, but couldn’t point to a single database or source that housed it all. Drawing on his local knowledge and expertise, Thomas compiled the relevant studies, inventories and projects from the last three to five years into one document, communicated with local partners about what questions the research left open, and guided our scoping process toward a unique question, ensuring we added value by building on previous work rather than repeating it.
But we can’t always rely on having a “Thomas.” A centralized host that collects and holds land and property studies for reference, along with the datasets used to create them, can help us align and build on our collective research and maximize collaboration rather than duplicate effort. The community development industry is missing a comprehensive, navigable central library that lets new inquiry build squarely on past work. Such a tool would combine the functions of a best-practices and research resource page, like the one formerly maintained by Reconnecting America, with a hard-data repository, like the Center for Transit Oriented Development’s TOD Database.
Though many have promoted and accepted big data’s value proposition, significant barriers remain to making it an effective tool for identifying naturally occurring affordable housing and improving community development. To better leverage big data, we need stronger relationships with both the public and private sectors, and the resources to access complete and current data. We need to re-examine how we present, sell, store, and manage big data so it becomes accessible to community development practitioners and nonprofits working on the ground to build more equitable communities. And we need to strengthen relationships between community development organizations and data partners (like CDFIs and brokers) who are working toward similar goals.
Big data can and should be a part of community development. However, we’re not yet at the point where we can use it efficiently. Taking the steps outlined in this series can help us get there.
This is only the beginning of our research and we will continue to learn and share as we dive further into our case study analyses. Stay tuned for our future posts about best practices for innovative public-private partnerships and our final research summary. And read the earlier blogs in the series, which cover navigating the realm of data in eTOD, the divide between the data you want and the data you can get, and data that’s held in people, not files.
Special thanks to Erin Austin for her contributions to this blog post.