1 00:00:08,710 --> 00:00:10,040 - [Emma] So welcome everyone 2 00:00:10,040 --> 00:00:14,680 to the Long-Term Monitoring Contributor talks. 3 00:00:14,680 --> 00:00:18,610 The first presenter will be Soren Donisvitch, 4 00:00:18,610 --> 00:00:21,720 who's a master's student at the University of Vermont, 5 00:00:21,720 --> 00:00:23,087 and he will be presenting on 6 00:00:23,087 --> 00:00:26,480 "The Applications and Utility of a Unified Continuous Forest 7 00:00:26,480 --> 00:00:29,911 Inventory Network in the Northeastern United States." 8 00:00:29,911 --> 00:00:31,950 - Thank you so much. 9 00:00:31,950 --> 00:00:34,390 So my name is Soren Donisvitch. 10 00:00:34,390 --> 00:00:36,620 I received my bachelor's from the University of Maine 11 00:00:36,620 --> 00:00:39,400 in Forestry and Survey Engineering Technology. 12 00:00:39,400 --> 00:00:41,590 I'm currently a master's student, 13 00:00:41,590 --> 00:00:43,203 a graduate student at the University of Vermont 14 00:00:43,203 --> 00:00:44,590 under Tony D'Amato. 15 00:00:44,590 --> 00:00:45,424 And today I'm gonna talk about 16 00:00:45,424 --> 00:00:48,160 the Northeastern Forest Inventory Network, 17 00:00:48,160 --> 00:00:49,173 also known as NEFIN. 18 00:00:51,260 --> 00:00:56,260 So, as a brief overview 19 00:00:57,180 --> 00:00:59,900 of the order of operations of this talk, 20 00:00:59,900 --> 00:01:02,700 we're gonna go over the background of NEFIN, 21 00:01:03,620 --> 00:01:05,070 the infrastructure and blueprints 22 00:01:05,070 --> 00:01:07,480 of the actual database itself, 23 00:01:07,480 --> 00:01:10,460 and then the utility and potential research aspects 24 00:01:10,460 --> 00:01:12,203 of this dataset. 25 00:01:14,470 --> 00:01:15,690 So what is NEFIN?
26 00:01:15,690 --> 00:01:17,260 Primarily, NEFIN is a web-based 27 00:01:17,260 --> 00:01:19,410 forest inventory data acquisition tool 28 00:01:19,410 --> 00:01:24,350 that allows for browsing of, and access to, CFI information, 29 00:01:24,350 --> 00:01:27,750 CFI being a Continuous Forest Inventory, 30 00:01:27,750 --> 00:01:30,372 so the same plot monitored over long periods of time. 31 00:01:30,372 --> 00:01:32,136 So what does this system accomplish? 32 00:01:32,136 --> 00:01:36,410 There's a wealth of forest inventory data 33 00:01:37,310 --> 00:01:39,210 out there in the Northeast, 34 00:01:39,210 --> 00:01:42,610 and a lot of it is sparse and separate. 35 00:01:42,610 --> 00:01:46,210 So this is aggregating a lot of those resources together. 36 00:01:46,210 --> 00:01:47,600 What we're trying to accomplish 37 00:01:47,600 --> 00:01:49,573 is a systematic inventory 38 00:01:49,573 --> 00:01:52,030 of metadata and methodological changes. 39 00:01:52,030 --> 00:01:56,900 So over time, those programs that are joined together 40 00:01:56,900 --> 00:01:58,210 may have had certain changes, 41 00:01:58,210 --> 00:02:00,290 such as switching from imperial to metric, 42 00:02:00,290 --> 00:02:03,040 since these span long periods of time 43 00:02:03,040 --> 00:02:05,670 and repetitive measurement. 44 00:02:05,670 --> 00:02:08,170 So this program matches and reconciles those changes, 45 00:02:08,170 --> 00:02:11,570 and then allows for a temporally 46 00:02:12,620 --> 00:02:16,460 variable and much more robust dataset. 47 00:02:16,460 --> 00:02:19,370 It's a unified and comprehensive CFI database in and of itself, 48 00:02:19,370 --> 00:02:22,460 and it's truly unified: fields are unified, species codes, 49 00:02:22,460 --> 00:02:27,460 et cetera, so this join is fully and entirely united.
50 00:02:28,440 --> 00:02:30,380 And then also we're gonna do some peer-reviewed 51 00:02:30,380 --> 00:02:33,040 and technical publications related to the database itself, 52 00:02:33,040 --> 00:02:36,020 how it was developed, and what the relative implications 53 00:02:36,020 --> 00:02:37,623 of using these datasets are. 54 00:02:38,860 --> 00:02:41,740 So for some background, this follows a lot of work 55 00:02:41,740 --> 00:02:44,840 from the FGMC on the Continuous Forest Inventory program 56 00:02:44,840 --> 00:02:49,256 comparison tool, in which the team took a bunch 57 00:02:49,256 --> 00:02:52,007 of these different CFI programs, compared them, 58 00:02:52,007 --> 00:02:54,390 and looked at the methodology to see 59 00:02:54,390 --> 00:02:57,220 if certain fields were able to be calculated or generated. 60 00:02:57,220 --> 00:02:59,470 So species composition, (indistinct) distribution, 61 00:02:59,470 --> 00:03:01,120 things like this. 62 00:03:01,120 --> 00:03:03,790 So building on this methodology comparison, NEFIN 63 00:03:03,790 --> 00:03:06,140 takes it a step further: instead 64 00:03:06,140 --> 00:03:10,180 of just evaluating, it's truly joining those datasets together 65 00:03:10,180 --> 00:03:13,770 and providing a product in which users can download 66 00:03:13,770 --> 00:03:17,163 and query out certain kinds of data from a joint dataset. 67 00:03:18,320 --> 00:03:21,330 So for the current CFI program land base itself, 68 00:03:21,330 --> 00:03:23,700 you can see in the map on the right, 69 00:03:23,700 --> 00:03:25,910 we currently have 12 programs that have been incorporated 70 00:03:25,910 --> 00:03:27,200 into the initial effort.
71 00:03:27,200 --> 00:03:29,340 So this effort is not yet fully joined 72 00:03:29,340 --> 00:03:31,680 in a database; we're still 73 00:03:31,680 --> 00:03:34,810 creating the hierarchical database structure itself, but all 74 00:03:34,810 --> 00:03:37,493 of the methodology has been gone through, matched 75 00:03:37,493 --> 00:03:40,980 up, and fields joined, to the point 76 00:03:40,980 --> 00:03:42,648 where we're starting to build that 77 00:03:42,648 --> 00:03:44,902 database infrastructure right now. 78 00:03:44,902 --> 00:03:45,735 As you can see, 79 00:03:45,735 --> 00:03:49,010 there are some holes throughout the Northeastern forest, 80 00:03:49,010 --> 00:03:50,838 but we're hopefully gonna be able to fill those 81 00:03:50,838 --> 00:03:53,820 in; there's a wealth of data that's out there. 82 00:03:53,820 --> 00:03:58,453 We've targeted 16 more potential datasets 83 00:03:58,453 --> 00:04:01,950 that we want to incorporate. 84 00:04:01,950 --> 00:04:03,070 So this is the project team. 85 00:04:03,070 --> 00:04:07,530 Jen Pontius is the primary PI, with Jim Duncan, 86 00:04:07,530 --> 00:04:11,660 Clarke Cooper, Ali Kosiba, Tony D'Amato, Emma Tait, 87 00:04:11,660 --> 00:04:13,403 Aaron Weiskittel and myself. 88 00:04:15,160 --> 00:04:18,600 So, some goals for this project in and of itself: 89 00:04:18,600 --> 00:04:21,960 we wanna increase the accessibility 90 00:04:21,960 --> 00:04:25,550 of the CFI data itself, creating a truly unified dataset. 91 00:04:25,550 --> 00:04:27,740 Those fields being truly joined makes it easier 92 00:04:27,740 --> 00:04:29,720 for users to actually use. 93 00:04:29,720 --> 00:04:31,813 It's truly simplified, to the point 94 00:04:31,813 --> 00:04:35,900 that many fields that are usable in forest mensuration 95 00:04:35,900 --> 00:04:37,460 can be easily queried out.
96 00:04:37,460 --> 00:04:39,930 It's an efficient use of methods, 97 00:04:39,930 --> 00:04:42,870 so that over time, the database can be somewhat resilient 98 00:04:42,870 --> 00:04:47,403 and maintain some kind of structure going into the future. 99 00:04:48,390 --> 00:04:50,250 We're also demonstrating the utility 100 00:04:50,250 --> 00:04:52,810 of these data resources as primary research 101 00:04:52,810 --> 00:04:55,580 by the researchers within this project. 102 00:04:55,580 --> 00:04:58,260 Increasing the connections between people who have access 103 00:04:58,260 --> 00:05:01,630 to these datasets is really important for this project. 104 00:05:01,630 --> 00:05:03,590 Just to give some background 105 00:05:03,590 --> 00:05:06,220 on what NEFIN actually does, 106 00:05:06,220 --> 00:05:08,940 and why it's a really useful tool: 107 00:05:08,940 --> 00:05:11,110 primarily, it's a time-saver. 108 00:05:11,110 --> 00:05:12,730 For most people, as a researcher, you come 109 00:05:12,730 --> 00:05:16,131 with a question, and CFI programs 110 00:05:16,131 --> 00:05:18,730 and data can be really useful as they deal with time, 111 00:05:18,730 --> 00:05:23,520 and they're really useful statistically 112 00:05:23,520 --> 00:05:24,728 in looking at certain questions. 113 00:05:24,728 --> 00:05:29,037 So you're looking through primary publications, 114 00:05:29,037 --> 00:05:30,710 you're calling people you know, 115 00:05:30,710 --> 00:05:33,080 to see if you can actually find the data. 116 00:05:33,080 --> 00:05:34,750 Once you have the data, you have to download it, 117 00:05:34,750 --> 00:05:37,950 whether it's a CSV or a Microsoft Access database, 118 00:05:37,950 --> 00:05:39,830 and you're then having to deal with getting it 119 00:05:39,830 --> 00:05:42,220 into a format that you can actually manage.
120 00:05:42,220 --> 00:05:44,120 Once you do that, then you really have to dig 121 00:05:44,120 --> 00:05:46,010 into the methodology, 122 00:05:46,010 --> 00:05:48,780 like how those fields were actually kept and taken: 123 00:05:48,780 --> 00:05:52,759 was it using imperial or metric, a variable-radius 124 00:05:52,759 --> 00:05:54,520 or fixed-radius plot? 125 00:05:54,520 --> 00:05:56,540 Once you do all that, then you can finally figure out 126 00:05:56,540 --> 00:05:57,810 whether or not that dataset can actually 127 00:05:57,810 --> 00:05:59,423 be used for your question. 128 00:06:00,440 --> 00:06:01,462 And you have to do this as much as possible; 129 00:06:01,462 --> 00:06:02,306 fundamentally, the more data 130 00:06:02,306 --> 00:06:05,328 you have, the better your research is really gonna be. 131 00:06:05,328 --> 00:06:07,630 Then you have to go through the process of cleaning. 132 00:06:07,630 --> 00:06:08,890 As we all probably know, 133 00:06:08,890 --> 00:06:12,560 it's like having to find those 6,000-foot-tall trees, 134 00:06:12,560 --> 00:06:16,920 or comparing species and species codes, changing them 135 00:06:16,920 --> 00:06:20,040 to be FVS-compatible, any of those kinds of things. 136 00:06:20,040 --> 00:06:21,255 At the end you really wanna have a nice 137 00:06:21,255 --> 00:06:23,290 and relatively clean dataset. 138 00:06:23,290 --> 00:06:25,850 And you're doing that for as many datasets as possible. 139 00:06:25,850 --> 00:06:27,552 And only then, once those fields join up, 140 00:06:27,552 --> 00:06:31,679 can you truly zip them together and create a unified dataset. 141 00:06:31,679 --> 00:06:33,648 And then, only then, can you actually look 142 00:06:33,648 --> 00:06:37,132 at that data, compare it against the actual question, 143 00:06:37,132 --> 00:06:39,350 and see if that dataset can actually be used to 144 00:06:39,350 --> 00:06:40,990 answer the questions you have. 145 00:06:40,990 --> 00:06:42,300 So this is the NEFIN process.
146 00:06:42,300 --> 00:06:44,590 So NEFIN takes all this into consideration 147 00:06:44,590 --> 00:06:45,843 and does it for you. 148 00:06:45,843 --> 00:06:48,310 It's a huge time-saver and allows you to go directly 149 00:06:48,310 --> 00:06:52,186 from a question to usable data, and see 150 00:06:52,186 --> 00:06:56,890 if you can actually use those data to answer your questions. 151 00:06:56,890 --> 00:06:59,340 Here's some background on the infrastructure. 152 00:07:00,300 --> 00:07:01,360 In general I'm gonna speed 153 00:07:01,360 --> 00:07:03,880 through this. There's the database uploader, 154 00:07:03,880 --> 00:07:06,220 where all these data are uploaded, 155 00:07:06,220 --> 00:07:09,180 and the metadata system, which tracks those things that change 156 00:07:09,180 --> 00:07:12,123 over time, so it allows for data processing to change. 157 00:07:12,123 --> 00:07:16,607 So you need to note when that plot was removed in 1973 158 00:07:17,950 --> 00:07:20,390 and then re-established in 2018 159 00:07:20,390 --> 00:07:23,190 but has the same plot number, 160 00:07:23,190 --> 00:07:26,380 all those kinds of minutiae; and then the data uploader 161 00:07:26,380 --> 00:07:27,970 is where you upload that data. 162 00:07:27,970 --> 00:07:30,390 The processor I'm gonna go into a little bit more. 163 00:07:30,390 --> 00:07:31,683 And then primarily, as a user, 164 00:07:31,683 --> 00:07:33,120 what you're gonna have access to 165 00:07:33,120 --> 00:07:35,270 and really be looking at is the access portal. 166 00:07:35,270 --> 00:07:37,790 So this is something that's gonna let you visualize 167 00:07:37,790 --> 00:07:40,010 and download those reports 168 00:07:40,010 --> 00:07:42,173 for the specific kinds of queries you want.
169 00:07:43,012 --> 00:07:45,810 So, for the blueprints and the processor itself, 170 00:07:45,810 --> 00:07:47,630 fundamentally what's really important here is 171 00:07:47,630 --> 00:07:51,150 that those datasets are broken down 172 00:07:51,150 --> 00:07:53,440 at the beginning, so that you have a Raw Archiving Script, 173 00:07:53,440 --> 00:07:55,950 so that the original image of those datasets is 174 00:07:55,950 --> 00:07:58,427 stored and can always be looked back to; 175 00:07:58,427 --> 00:08:01,170 if you want to, you can look at the primary data 176 00:08:01,170 --> 00:08:03,713 the way it came into NEFIN. 177 00:08:05,010 --> 00:08:06,680 And then the Transcribed Script, 178 00:08:06,680 --> 00:08:09,408 where it's broken into something that we can further break 179 00:08:09,408 --> 00:08:12,213 down; it's then broken into two things, which are 180 00:08:12,213 --> 00:08:14,963 Standardized Attributes and Ancillary Attributes. 181 00:08:14,963 --> 00:08:18,400 So the standardized attributes are those attributes from 182 00:08:18,400 --> 00:08:21,510 forest mensuration that people commonly use: 183 00:08:21,510 --> 00:08:25,130 diameter, tree height, species, et cetera. 184 00:08:25,130 --> 00:08:27,440 The ancillary data are more of those 185 00:08:27,440 --> 00:08:29,560 programmatically unique fields, 186 00:08:29,560 --> 00:08:34,230 so a fungal pathogen on a tree, stand age, cull, 187 00:08:34,230 --> 00:08:36,020 things that are kind of more unique 188 00:08:36,020 --> 00:08:37,330 that we don't really wanna throw out. 189 00:08:37,330 --> 00:08:39,210 We wanna make sure that people who have these data 190 00:08:39,210 --> 00:08:41,423 can pull them out if they want to. 191 00:08:42,960 --> 00:08:44,660 Fundamentally, the output 192 00:08:44,660 --> 00:08:47,800 that we're really targeting is 193 00:08:47,800 --> 00:08:52,480 a web access portal that is intuitive and customizable.
194 00:08:52,480 --> 00:08:55,255 So if I wanna find all of the sugar maple, 195 00:08:55,255 --> 00:08:57,391 all the plots that contain sugar maple 196 00:08:57,391 --> 00:09:01,210 within the Northeastern region, within the NEFIN network, 197 00:09:01,210 --> 00:09:03,000 you can query that out. 198 00:09:03,000 --> 00:09:04,180 And in the output, 199 00:09:04,180 --> 00:09:07,670 this kind of a dataset becomes extremely valuable, 200 00:09:07,670 --> 00:09:09,870 and you can see the value, when you actually mesh it 201 00:09:09,870 --> 00:09:13,890 with other datasets such as FIA. 202 00:09:13,890 --> 00:09:18,060 So FIA in and of itself is an amazing program, a wealth of data, 203 00:09:18,060 --> 00:09:22,810 and very useful for researchers and forest managers, 204 00:09:22,810 --> 00:09:24,220 but this is something that will hopefully 205 00:09:24,220 --> 00:09:25,450 enhance those datasets. 206 00:09:25,450 --> 00:09:26,550 If the data's out there, 207 00:09:26,550 --> 00:09:29,250 we might as well use as much as we can. 208 00:09:29,250 --> 00:09:31,300 So another thing is that we wanna make sure 209 00:09:31,300 --> 00:09:35,130 that the outputs in those reports are compatible with FVS. 210 00:09:35,130 --> 00:09:36,980 So for those plots that contain sugar maple, 211 00:09:36,980 --> 00:09:39,950 you can upload them 212 00:09:39,950 --> 00:09:42,960 in a compatible way to FVS and run those very quickly, 213 00:09:42,960 --> 00:09:43,793 so you're able to go 214 00:09:43,793 --> 00:09:45,753 from questions to answers quite quickly. 215 00:09:48,110 --> 00:09:51,023 So, some utility for these datasets: 216 00:09:52,620 --> 00:09:55,045 this kind of data is extremely useful. 217 00:09:55,045 --> 00:09:55,990 The first thing that comes 218 00:09:55,990 --> 00:09:58,120 to mind for me is geospatial aspects.
219 00:09:58,120 --> 00:10:01,860 So, tying geospatial and remote sensing products 220 00:10:01,860 --> 00:10:03,960 to the ground, taking that data and actually tying it 221 00:10:03,960 --> 00:10:07,430 to real data, tying it to the ground, I should say, 222 00:10:07,430 --> 00:10:10,210 can be a really valuable use 223 00:10:10,210 --> 00:10:14,603 of this kind of a data source, as it spans decades. 224 00:10:15,460 --> 00:10:17,100 Forest monitoring and research: 225 00:10:17,100 --> 00:10:19,708 this kind 226 00:10:19,708 --> 00:10:22,516 of dataset is really, pick your question 227 00:10:22,516 --> 00:10:27,280 and run with it, because there's so much you can do. 228 00:10:27,280 --> 00:10:30,289 So, long-term evaluation of regional species composition, 229 00:10:30,289 --> 00:10:32,313 where you're able to evaluate over time 230 00:10:33,579 --> 00:10:37,200 and really see general patterns emerge; regeneration 231 00:10:37,200 --> 00:10:39,520 and growth dynamics; temporally isolated or 232 00:10:39,520 --> 00:10:42,020 continuous assessment of forest productivity; 233 00:10:42,020 --> 00:10:44,570 growth and yield; carbon storage and sequestration. 234 00:10:46,310 --> 00:10:48,140 When you're looking to ask big questions, 235 00:10:48,140 --> 00:10:50,160 you have to have big data. 236 00:10:50,160 --> 00:10:55,160 So regionally, this kind of a united data source allows 237 00:10:55,880 --> 00:10:57,740 for regional questions to be asked 238 00:10:57,740 --> 00:11:00,800 in just a better way. 239 00:11:00,800 --> 00:11:03,040 And the modeling applications are really 240 00:11:03,040 --> 00:11:04,510 exciting as well, 241 00:11:04,510 --> 00:11:06,660 both in creating certain types of models, 242 00:11:06,660 --> 00:11:08,030 whether it be a productivity model 243 00:11:08,030 --> 00:11:09,833 or a species migration model.
244 00:11:10,690 --> 00:11:12,190 And in the enhancement of current models: 245 00:11:12,190 --> 00:11:14,250 adding data is always really good for testing 246 00:11:14,250 --> 00:11:18,243 and evaluating current models and creating new ones. 247 00:11:19,849 --> 00:11:22,900 So, this is my first semester as a master's student, 248 00:11:22,900 --> 00:11:25,220 so a lot of my research is still tentative, 249 00:11:25,220 --> 00:11:26,727 but I'm really excited to be able to use 250 00:11:26,727 --> 00:11:28,543 the NEFIN data itself. 251 00:11:29,870 --> 00:11:31,599 And so I'm quite interested 252 00:11:31,599 --> 00:11:34,090 in trying to leverage these datasets 253 00:11:34,090 --> 00:11:35,976 in the best way possible across the region, 254 00:11:35,976 --> 00:11:38,900 and something that came to me was forest tension zones, 255 00:11:38,900 --> 00:11:41,420 a tension zone being a geographic area 256 00:11:41,420 --> 00:11:44,355 that marks a change from one type of vegetation to another: 257 00:11:44,355 --> 00:11:47,909 the hardwoods transitioning to softwoods, 258 00:11:47,909 --> 00:11:51,860 northern hardwoods into broadleaf deciduous. 259 00:11:51,860 --> 00:11:52,974 These kinds of transition zones 260 00:11:52,974 --> 00:11:57,410 are really interesting for forest dynamics. 261 00:11:57,410 --> 00:12:02,410 So, we're talking about mortality, ingrowth, regeneration, 262 00:12:03,000 --> 00:12:05,860 species composition; there's any number 263 00:12:05,860 --> 00:12:06,693 of things that I really wanna look 264 00:12:06,693 --> 00:12:08,930 at within these kinds of tension zones, 265 00:12:08,930 --> 00:12:13,930 as well as primarily incorporating a comparative tool. 266 00:12:14,650 --> 00:12:16,544 Or not a comparative tool, but a comparison 267 00:12:16,544 --> 00:12:20,150 between the NEFIN data and the FIA data.
268 00:12:20,150 --> 00:12:21,846 So the enhanced set, where it's FIA 269 00:12:21,846 --> 00:12:24,900 plus NEFIN versus just FIA, gives 270 00:12:24,900 --> 00:12:26,954 some statistical evaluation 271 00:12:26,954 --> 00:12:29,620 of these datasets with these questions. 272 00:12:29,620 --> 00:12:31,790 I'm pretty sure I'm probably running close 273 00:12:31,790 --> 00:12:35,840 to time here, but I'm really here trying to 274 00:12:35,840 --> 00:12:38,070 champion the NEFIN project itself. 275 00:12:38,070 --> 00:12:40,530 If you or anyone you know has access to data 276 00:12:40,530 --> 00:12:42,600 you believe would be worthwhile 277 00:12:42,600 --> 00:12:44,339 for this project to incorporate, 278 00:12:44,339 --> 00:12:46,860 we're really open to that idea; 279 00:12:46,860 --> 00:12:48,240 we wanna try to get as much data 280 00:12:48,240 --> 00:12:50,770 as possible into this dataset, 281 00:12:50,770 --> 00:12:54,150 so we can leverage it and make it open to the public. 282 00:12:54,150 --> 00:12:56,410 So if you have any questions, 283 00:12:56,410 --> 00:12:58,460 or wanna contact me about those data 284 00:12:58,460 --> 00:13:01,279 you might have access to, please contact me; 285 00:13:01,279 --> 00:13:02,950 my email is right below, 286 00:13:02,950 --> 00:13:05,210 and I believe it's time for questions. 287 00:13:05,210 --> 00:13:07,410 - There's a question in the chat 288 00:13:07,410 --> 00:13:12,410 from Jen Pontius asking what type of QA/QC goes 289 00:13:12,790 --> 00:13:15,433 into evaluating which datasets to incorporate? 290 00:13:17,490 --> 00:13:21,317 - I'm trying to find that, and by QA/QC, 291 00:13:22,192 --> 00:13:24,623 what do you... do you have any?
292 00:13:27,430 --> 00:13:28,263 - [Jennifer] Yeah, I guess, Soren, 293 00:13:28,263 --> 00:13:31,870 I'm just looking for a little more detail on how we know 294 00:13:31,870 --> 00:13:35,070 whether or not it's worth bringing somebody's inventory data 295 00:13:35,070 --> 00:13:39,348 into the database. What if it's citizen science data, what 296 00:13:39,348 --> 00:13:42,350 if it's data a high school class has collected 297 00:13:42,350 --> 00:13:43,620 from behind their school? 298 00:13:43,620 --> 00:13:45,834 Just curious how much thought has been 299 00:13:45,834 --> 00:13:47,527 put into that yet. 300 00:13:47,527 --> 00:13:50,770 - Yeah, so I was not really a part of 301 00:13:51,870 --> 00:13:55,323 that initial effort, but from what I understand, a big part 302 00:13:55,323 --> 00:13:58,310 of this is that it's Continuous Forest Inventory, 303 00:13:58,310 --> 00:14:00,723 so an inventory that's repetitive: 304 00:14:01,891 --> 00:14:03,985 you take a plot in a forest and 305 00:14:03,985 --> 00:14:06,969 you measure that same point over and over. 306 00:14:06,969 --> 00:14:09,620 That's fundamentally a really necessary thing 307 00:14:09,620 --> 00:14:12,370 for a dataset to be incorporated, because it has to be a CFI. 308 00:14:13,560 --> 00:14:17,360 From my understanding of other aspects, 309 00:14:17,360 --> 00:14:20,193 citizen science data is always extremely valuable, 310 00:14:21,100 --> 00:14:24,940 but I would not be the first person to really say 311 00:14:24,940 --> 00:14:28,053 whether or not those data can truly be incorporated. 312 00:14:32,230 --> 00:14:35,727 - And we have a question from Charles Cogbill: 313 00:14:35,727 --> 00:14:39,500 the tension zone diagram was for the pre-settlement period. 314 00:14:39,500 --> 00:14:41,610 Are you interested in seeing 315 00:14:41,610 --> 00:14:44,220 if this will change with modern data?
316 00:14:44,220 --> 00:14:46,020 - Yes, yes I am. 317 00:14:46,020 --> 00:14:50,150 So that Harvard paper was extremely influential 318 00:14:50,150 --> 00:14:53,010 for me, being a traditionally trained 319 00:14:53,010 --> 00:14:55,850 surveyor looking at witness trees; that work was based 320 00:14:55,850 --> 00:15:00,733 on witness trees, which is amazing, really wonderful work. 321 00:15:02,670 --> 00:15:07,670 That kind of work is really amazing, 322 00:15:08,330 --> 00:15:09,880 and so I am interested to see 323 00:15:09,880 --> 00:15:12,450 whether and how that's changed, 324 00:15:12,450 --> 00:15:15,320 how that's shifted, with modern data 325 00:15:15,320 --> 00:15:19,090 since pre-settlement; leveraging those kinds of data 326 00:15:19,090 --> 00:15:21,340 sources is really interesting. 327 00:15:21,340 --> 00:15:22,173 I just love that paper, 328 00:15:22,173 --> 00:15:26,660 I could talk about it forever, but yes, 329 00:15:26,660 --> 00:15:29,380 I am definitely interested to see, over time, 330 00:15:29,380 --> 00:15:31,893 the general trends in forest composition; 331 00:15:33,459 --> 00:15:35,710 I'm really interested in leveraging these kinds 332 00:15:35,710 --> 00:15:36,980 of data over long periods of time. 333 00:15:36,980 --> 00:15:38,140 So yes, I would be interested 334 00:15:38,140 --> 00:15:43,140 in looking at how modern data has changed relative to, 335 00:15:43,270 --> 00:15:45,593 or can be compared to, pre-settlement forests. 336 00:15:50,279 --> 00:15:52,680 - And Jerry Carlson asks, 337 00:15:52,680 --> 00:15:54,620 are you considering private land security 338 00:15:54,620 --> 00:15:57,760 from remote sensing data of land use?
339 00:15:57,760 --> 00:16:00,710 - Yes. So that was, I think, one 340 00:16:00,710 --> 00:16:02,080 of the biggest barriers to entry 341 00:16:02,080 --> 00:16:07,080 for the private corporations or entities: 342 00:16:07,440 --> 00:16:09,960 they go out and they wanna make sure that their data is secure, 343 00:16:09,960 --> 00:16:13,600 and the plot integrity is also extremely important. 344 00:16:13,600 --> 00:16:17,568 So, much like with FIA, I don't think 345 00:16:17,568 --> 00:16:21,363 plot locations are going to be really widely available, 346 00:16:23,036 --> 00:16:26,003 but that's something that's kind of 347 00:16:27,190 --> 00:16:31,600 above where I am in this project. But no, security 348 00:16:31,600 --> 00:16:34,520 for landowners is being taken into consideration, 349 00:16:34,520 --> 00:16:39,520 especially private landowners, and all of these datasets, 350 00:16:40,300 --> 00:16:42,515 before being incorporated, go through a process 351 00:16:42,515 --> 00:16:44,372 where we're talking with the people who own 352 00:16:44,372 --> 00:16:46,870 that data, who have control of that data, 353 00:16:46,870 --> 00:16:48,690 so that we're not sharing anything beyond 354 00:16:48,690 --> 00:16:53,253 what they're comfortable with.