pfracoastal

Classes

Inputs

Inputs(blabber=True, use_heatmap=True, hm_bandwidth=1100, hm_resolution=500, hm_name='heatmap', mc_n=2000, nbounds=tuple([0.0001, 1]), storm_csv='', bldg_path='', bldg_lay=None, swel_mpath='', swel_path='', swelA_path='', swelB_path='', waveA_path='', waveB_path='', use_uncertainty=True, use_cutoff=True, use_cutoff10=False, use_eWet=True, use_waves=True, use_twl=False, use_wavecut50=False, use_erosion=False, use_insurance=False, use_contents=False, use_netcdf=False, use_outcsv=False, bddf_lut_path='', bldg_ddf_lut=None, cddf_lut_path=None, cont_ddf_lut=None, proj_prefix='', out_shp_path='', GCB_fid='location', GCB_Bded='BLDG_DED', GCB_Blim='BLDG_LIMIT', GCB_Bval='BLDG_VALUE', GCB_Cded='CNT_DED', GCB_Clim='CNT_LIM', GCB_Cval='CNT_VALUE', GCB_Bsto='NUM_STORIE', GCB_Bfou='foundation', GCB_Bbfi='BasementFi', GCB_Bffh='FIRST_FLOO', GCB_Bdem='DEMft')

Represents all possible input parameters for the PFRA Coastal model. In the original R code, inputs and processing variables lived in global scope; this class instead encapsulates all user inputs for the model in a single object, so values can be passed between functions without polluting global scope.

Initialize Inputs with relevant input parameters.

Parameters:

Name Type Description Default
blabber bool

Whether to print log messages to console and disk.

True
use_heatmap bool

Whether to generate a GeoTIFF heatmap of building losses.

True
hm_bandwidth int

Search radius for points used during heatmap generation.

1100
hm_resolution int

Raster resolution used during heatmap generation.

500
hm_name str

Name for the heatmap output file (minus extension).

'heatmap'
mc_n int

Number of storms to use when generating the Monte Carlo storm suite.

2000
nbounds tuple

Upper and lower bounds for the probability distribution function used to generate a Monte Carlo storm suite.

tuple([0.0001, 1])
storm_csv str

Path to CSV file containing an existing Monte Carlo storm suite to use instead of generating a new one.

''
bldg_path str

Path to building point shapefile.

''
bldg_lay str

Not used.

None
swel_mpath str

Not used.

''
swel_path str

Not used.

''
swelA_path str

Path to surge (Best Estimate) point shapefile.

''
swelB_path str

Path to surge (84% Confidence Limit) point shapefile.

''
waveA_path str

Path to wave height (Best Estimate) point shapefile.

''
waveB_path str

Path to wave height (84% Confidence Limit) point shapefile.

''
use_uncertainty bool

Whether to include uncertainty in the analysis. This value should always be set to "True" as the model always assumes uncertainty is applied.

True
use_cutoff bool

Whether to apply a cutoff to the damage function. This value should always be set to "True" as the model always assumes a cutoff is applied.

True
use_cutoff10 bool

Whether to apply a cutoff at 10% damage. This value should always be set to "False" as the model does not assume a 10% cutoff is applied.

False
use_eWet bool

Whether to include minimum wetting in the analysis. This value should always be set to "True" as the model assumes no damage when ground is dry.

True
use_waves bool

Whether to include wave shapefiles in the analysis. This value should always be set to "True" as the model always assumes wave data is present.

True
use_twl bool

Whether to include total water level in the analysis. This value should always be set to "False" as the model always generates a TWL instead of assuming one is provided.

False
use_wavecut50 bool

Whether to apply a wave cutoff at 50% damage. This value should always be set to "False" as the model does not assume a 50% wave cutoff is applied.

False
use_erosion bool

Whether to include erosion effects in the analysis. This value should always be set to "False" as the model does not assume that erosion is included.

False
use_insurance bool

Whether to adjust losses in the analysis based on insurance deductibles and limits. This value should always be set to "False" as the model does not currently support this feature.

False
use_contents bool

Whether to include contents damage in the analysis. This value should always be set to "False" as the model does not currently support this feature.

False
use_netcdf bool

Not used.

False
use_outcsv bool

Whether to output intermediate loss tables for each building as CSV files.

False
bddf_lut_path str

Path to building damage function CSV file.

''
bldg_ddf_lut DataFrame

Building damage function lookup table stored in a pandas DataFrame. This is set automatically by the bddf_lut_path setter and should not be set directly.

None
cddf_lut_path str

Path to contents damage function CSV file. This value should always be set to "None" or an empty string as the model does not currently support contents damage.

None
cont_ddf_lut DataFrame

Contents damage function lookup table as a pandas DataFrame. This is set automatically by the cddf_lut_path setter and should not be set directly.

None
proj_prefix str

A text prefix to prepend to the names of all output files.

''
out_shp_path str

Path to a directory where all output files will be saved.

''
GCB_fid str

Field name in the building shapefile that contains the unique building identifier. Required.

'location'
GCB_Bded str

Field name for building insurance deductible in the building shapefile.

'BLDG_DED'
GCB_Blim str

Field name for building insurance limit in the building shapefile.

'BLDG_LIMIT'
GCB_Bval str

Field name for building replacement cost in the building shapefile. Required.

'BLDG_VALUE'
GCB_Cded str

Field name for contents insurance deductible in the building shapefile.

'CNT_DED'
GCB_Clim str

Field name for contents insurance limit in the building shapefile.

'CNT_LIM'
GCB_Cval str

Field name for contents replacement cost in the building shapefile.

'CNT_VALUE'
GCB_Bsto str

Field name for number of stories in the building shapefile. Required.

'NUM_STORIE'
GCB_Bfou str

Field name for foundation type in the building shapefile. Required.

'foundation'
GCB_Bbfi str

Field name for basement finish type in the building shapefile. Required.

'BasementFi'
GCB_Bffh str

Field name for first floor height in the building shapefile. Required.

'FIRST_FLOO'
GCB_Bdem str

Field name for ground elevation in the building shapefile. Required.

'DEMft'

Returns:

Type Description
object

An instance of the Inputs class with all input parameters set as attributes.
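Most parameters have working defaults; only the file paths and field names need to reflect the project at hand. The sketch below is a hypothetical construction example, assuming the import path shown in the "Source code" note; every file path is a placeholder, not a real project file, and the import is guarded so the sketch degrades gracefully where the package is not installed.

```python
# Hypothetical usage sketch -- module path assumed from the "Source code" note;
# all file paths below are placeholders.
try:
    from inland_consequences.coastal.pfracoastal import Inputs
except ImportError:  # package not installed in this environment
    Inputs = None

if Inputs is not None:
    inputs = Inputs(
        bldg_path='buildings.shp',     # building point shapefile
        swelA_path='swel_be.shp',      # surge, Best Estimate
        swelB_path='swel_cl84.shp',    # surge, 84% Confidence Limit
        waveA_path='wave_be.shp',      # wave height, Best Estimate
        waveB_path='wave_cl84.shp',    # wave height, 84% Confidence Limit
        bddf_lut_path='bldg_ddf.csv',  # building damage function lookup
        proj_prefix='demo',
        out_shp_path='output',
    )
```

All remaining parameters keep the defaults listed above (for example, `mc_n=2000` storms and `use_waves=True`).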

Source code in src/inland_consequences/coastal/pfracoastal.py
def __init__(self, blabber=True, use_heatmap=True, hm_bandwidth=1100, hm_resolution=500, hm_name="heatmap",
    mc_n=2000, nbounds=tuple([0.0001, 1]), storm_csv='', bldg_path='', bldg_lay=None,
    swel_mpath='', swel_path='', swelA_path='', swelB_path='', waveA_path='', waveB_path='', use_uncertainty=True,
    use_cutoff=True, use_cutoff10=False, use_eWet=True, use_waves=True, use_twl=False, use_wavecut50=False, use_erosion=False,
    use_insurance=False, use_contents=False, use_netcdf=False, use_outcsv=False, bddf_lut_path='',
    bldg_ddf_lut=None, cddf_lut_path=None, cont_ddf_lut=None, proj_prefix='', out_shp_path='',
    GCB_fid="location", GCB_Bded="BLDG_DED", GCB_Blim="BLDG_LIMIT", GCB_Bval="BLDG_VALUE", GCB_Cded="CNT_DED", 
    GCB_Clim="CNT_LIM", GCB_Cval="CNT_VALUE", GCB_Bsto="NUM_STORIE", GCB_Bfou="foundation", GCB_Bbfi="BasementFi", GCB_Bffh="FIRST_FLOO", GCB_Bdem="DEMft") -> object:

    """
    Initialize Inputs with relevant input parameters.

    Args:
        blabber (bool): Whether to print log messages to console and disk.
        use_heatmap (bool): Whether to generate a GeoTIFF heatmap of building losses.
        hm_bandwidth (int): Search radius for points used during heatmap generation.
        hm_resolution (int): Raster resolution used during heatmap generation.
        hm_name (str): Name for the heatmap output file (minus extension).
        mc_n (int): Number of storms to use when generating the Monte Carlo storm suite.
        nbounds (tuple): Upper and lower bounds for the probability distribution function used to generate a Monte Carlo storm suite.
        storm_csv (str): Path to CSV file containing an existing Monte Carlo storm suite to use instead of generating a new one.
        bldg_path (str): Path to building point shapefile.
        bldg_lay (str): Not used.
        swel_mpath (str): Not used.
        swel_path (str): Not used.
        swelA_path (str): Path to surge (Best Estimate) point shapefile.
        swelB_path (str): Path to surge (84% Confidence Limit) point shapefile.
        waveA_path (str): Path to wave height (Best Estimate) point shapefile.
        waveB_path (str): Path to wave height (84% Confidence Limit) point shapefile.
        use_uncertainty (bool): Whether to include uncertainty in the analysis. This value should always be set to "True" as the model always assumes uncertainty is applied.
        use_cutoff (bool): Whether to apply a cutoff to the damage function. This value should always be set to "True" as the model always assumes a cutoff is applied.
        use_cutoff10 (bool): Whether to apply a cutoff at 10% damage. This value should always be set to "False" as the model does not assume a 10% cutoff is applied.
        use_eWet (bool): Whether to include minimum wetting in the analysis. This value should always be set to "True" as the model assumes no damage when ground is dry.
        use_waves (bool): Whether to include wave shapefiles in the analysis. This value should always be set to "True" as the model always assumes wave data is present.
        use_twl (bool): Whether to include total water level in the analysis. This value should always be set to "False" as the model always generates a TWL instead of assuming one is provided.
        use_wavecut50 (bool): Whether to apply a wave cutoff at 50% damage. This value should always be set to "False" as the model does not assume a 50% wave cutoff is applied.
        use_erosion (bool): Whether to include erosion effects in the analysis. This value should always be set to "False" as the model does not assume that erosion is included.
        use_insurance (bool): Whether to adjust losses in the analysis based on insurance deductibles and limits. This value should always be set to "False" as the model does not currently support this feature.
        use_contents (bool): Whether to include contents damage in the analysis. This value should always be set to "False" as the model does not currently support this feature.
        use_netcdf (bool): Not used.
        use_outcsv (bool): Whether to output intermediate loss tables for each building as CSV files.
        bddf_lut_path (str): Path to building damage function CSV file.
        bldg_ddf_lut (pd.DataFrame): Building damage function lookup table stored in a pandas DataFrame. This is set automatically by the bddf_lut_path setter and should not be set directly.
        cddf_lut_path (str): Path to contents damage function CSV file. This value should always be set to "None" or an empty string as the model does not currently support contents damage.
        cont_ddf_lut (pd.DataFrame): Contents damage function lookup table as a pandas DataFrame. This is set automatically by the cddf_lut_path setter and should not be set directly.
        proj_prefix (str): A text prefix to prepend to the names of all output files.
        out_shp_path (str): Path to a directory where all output files will be saved.
        GCB_fid (str): Field name in the building shapefile that contains the unique building identifier. Required.
        GCB_Bded (str): Field name for building insurance deductible in the building shapefile.
        GCB_Blim (str): Field name for building insurance limit in the building shapefile.
        GCB_Bval (str): Field name for building replacement cost in the building shapefile. Required.
        GCB_Cded (str): Field name for contents insurance deductible in the building shapefile.
        GCB_Clim (str): Field name for contents insurance limit in the building shapefile.
        GCB_Cval (str): Field name for contents replacement cost in the building shapefile.
        GCB_Bsto (str): Field name for number of stories in the building shapefile. Required.
        GCB_Bfou (str): Field name for foundation type in the building shapefile. Required.
        GCB_Bbfi (str): Field name for basement finish type in the building shapefile. Required.
        GCB_Bffh (str): Field name for first floor height in the building shapefile. Required.
        GCB_Bdem (str): Field name for ground elevation in the building shapefile. Required.

    Returns:
        An instance of the Inputs class with all input parameters set as attributes.
    """

    self.blabber = blabber
    self.use_heatmap = use_heatmap
    self.hm_bandwidth = hm_bandwidth
    self.hm_resolution = hm_resolution
    self.hm_name = hm_name
    self.mc_n = mc_n
    self.nbounds = nbounds

    self.storm_csv = storm_csv

    self.bldg_path = bldg_path
    self.bldg_lay = bldg_lay

    self.swel_mpath = swel_mpath
    self.swel_path = swel_path
    self.swelA_path = swelA_path
    self.swelB_path = swelB_path
    self.waveA_path = waveA_path
    self.waveB_path = waveB_path

    self.use_uncertainty = use_uncertainty
    self.use_cutoff = use_cutoff
    self.use_cutoff10 = use_cutoff10
    self.use_eWet = use_eWet
    self.use_waves = use_waves
    self.use_twl = use_twl
    self.use_wavecut50 = use_wavecut50
    self.use_erosion = use_erosion
    self.use_insurance = use_insurance
    self.use_contents = use_contents
    self.use_netcdf = use_netcdf
    self.use_outcsv = use_outcsv

    self.bddf_lut_path = bddf_lut_path
    self.bldg_ddf_lut = bldg_ddf_lut

    self.cddf_lut_path = cddf_lut_path
    self.cont_ddf_lut = cont_ddf_lut

    self.out_shp_path = out_shp_path
    self.proj_prefix = proj_prefix

    self.bldg_attr_map = pd.DataFrame.from_dict(self.BLDG_ATTR_MAP_DATA)
    self.bldg_attr_map.index = list(range(self.bldg_attr_map.shape[0])) # to make sure the row index has labels

    self.guts_attr_map = pd.DataFrame.from_dict(self.GUTS_ATTR_MAP_DATA)

    self.GCB_fid = GCB_fid
    self.GCB_Bded = GCB_Bded
    self.GCB_Blim = GCB_Blim
    self.GCB_Bval = GCB_Bval
    self.GCB_Cded = GCB_Cded
    self.GCB_Clim = GCB_Clim
    self.GCB_Cval = GCB_Cval
    self.GCB_Bsto = GCB_Bsto
    self.GCB_Bfou = GCB_Bfou
    self.GCB_Bbfi = GCB_Bbfi
    self.GCB_Bffh = GCB_Bffh
    self.GCB_Bdem = GCB_Bdem
Attributes
blabber property writable
blabber: bool

True to print log messages to console and disk.

use_stormsuite property
use_stormsuite: bool

True if the user has provided a path to a CSV file in self.storm_csv to use for the Monte Carlo storm suite instead of generating a new one. False if self.storm_csv does not point to a storm suite file (PFRACoastal will generate a new storm suite in this case).

bddf_lut_path property writable
bddf_lut_path: str

Path to the building damage function CSV file.

blabfile property
blabfile: str

Path to the file where log messages will be written if self.blabber is True. The log file will be named using the project prefix and saved in the output file directory.

GCB_fid property writable
GCB_fid: str

Field name in the building shapefile that contains the unique building identifier.

GCB_Bded property writable
GCB_Bded: str

Field name for building insurance deductible in the building shapefile.

GCB_Blim property writable
GCB_Blim: str

Field name for building insurance limit in the building shapefile.

GCB_Bval property writable
GCB_Bval: str

Field name for building replacement cost in the building shapefile.

GCB_Cded property writable
GCB_Cded: str

Field name for contents insurance deductible in the building shapefile.

GCB_Clim property writable
GCB_Clim: str

Field name for contents insurance limit in the building shapefile.

GCB_Cval property writable
GCB_Cval: str

Field name for contents replacement cost in the building shapefile.

GCB_Bsto property writable
GCB_Bsto: str

Field name for number of stories in the building shapefile.

GCB_Bfou property writable
GCB_Bfou: str

Field name for foundation type in the building shapefile.

GCB_Bbfi property writable
GCB_Bbfi: str

Field name for basement finish in the building shapefile.

GCB_Bffh property writable
GCB_Bffh: str

Field name for first floor height in the building shapefile.

GCB_Bdem property writable
GCB_Bdem: str

Field name for ground elevation in the building shapefile.
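The use_stormsuite flag above interacts with mc_n and nbounds: when no storm_csv is supplied, the model generates its own Monte Carlo storm suite of mc_n storms with probabilities bounded by nbounds. The actual sampling scheme is internal to the model; as a rough, purely illustrative sketch of drawing bounded exceedance probabilities (the function name and log-space choice are assumptions, not the model's method):

```python
import math
import random

def sample_storm_probs(mc_n=2000, nbounds=(0.0001, 1), seed=42):
    """Draw mc_n exceedance probabilities within the nbounds limits.

    Sampling uniformly in log10 space keeps rare, low-probability storms
    represented; this is an illustrative choice, not the model's actual scheme.
    """
    rng = random.Random(seed)
    lo, hi = (math.log10(b) for b in nbounds)
    return [10 ** rng.uniform(lo, hi) for _ in range(mc_n)]

probs = sample_storm_probs()  # 2000 probabilities in [0.0001, 1]
```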

Functions

PFRACoastal

PFRACoastal()

Class to run the PFRA Coastal model. This class is designed to encapsulate all of the functionality for running the model, including validating inputs, running the analysis, and generating outputs.
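A typical end-to-end call constructs an Inputs object and passes it to runPFRACoastal. The sketch below is hypothetical: the import path is assumed from the "Source code" note, the file paths are placeholders, and the import is guarded so the sketch degrades gracefully where the package is not installed.

```python
# Hypothetical end-to-end sketch; file paths are placeholders.
try:
    from inland_consequences.coastal.pfracoastal import Inputs, PFRACoastal
except ImportError:  # package not installed in this environment
    Inputs = PFRACoastal = None

if PFRACoastal is not None:
    inputs = Inputs(
        bldg_path='buildings.shp', swelA_path='swel_be.shp',
        swelB_path='swel_cl84.shp', waveA_path='wave_be.shp',
        waveB_path='wave_cl84.shp', bddf_lut_path='bldg_ddf.csv',
        proj_prefix='demo', out_shp_path='output',
    )
    # Validates the inputs, then runs the full analysis and writes outputs
    # to inputs.out_shp_path. Validation halts the run if paths are invalid.
    PFRACoastal().runPFRACoastal(inputs)
```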

Source code in src/inland_consequences/coastal/pfracoastal.py
def __init__(self) -> object:
    pass
Functions
runPFRACoastal
runPFRACoastal(inputs: Inputs) -> None

Run the PFRA Coastal model using the provided inputs.

Parameters:

Name Type Description Default
inputs Inputs

An instance of the Inputs class containing all input parameters for the model.

required
Source code in src/inland_consequences/coastal/pfracoastal.py
    def runPFRACoastal(self, inputs: Inputs) -> None:
        """
        Run the PFRA Coastal model using the provided inputs.

        Args:
            inputs (Inputs): An instance of the Inputs class containing all input parameters for the model.
        """
        lib = _PFRACoastal_Lib()

        # configure logging
        logger.setLevel("INFO")
        if inputs.blabfile:
            fh = logging.FileHandler(inputs.blabfile, mode='w')
            fh.setLevel("INFO")
            logger.addHandler(fh)

        if inputs.blabber:
            ch = logging.StreamHandler()
            ch.setLevel("INFO")
            logger.addHandler(ch)

        ##############
        #   STEP 0 - initiate parallel processing
        fullAnalysis_start = monotonic()

        lib.write_log("")
        lib.write_log("**********")
        lib.write_log("BEGIN STEP 0. Setup...")
        lib.write_log("")

        lib.write_log("Summary of inputs:")

        lib.write_log(f"Buildings: {inputs.bldg_path}")
        lib.write_log("Input Building Attributes...")
        lib.write_log(inputs.bldg_attr_map.iloc[1:12, inputs.bldg_attr_map.columns.get_indexer(["DESC", "IN"])])
        lib.write_log(f"SWEL A: {inputs.swelA_path}")
        lib.write_log(f"SWEL B: {inputs.swelB_path}")
        lib.write_log(f"Use Waves: {inputs.use_waves}")
        if inputs.use_waves:
            lib.write_log(f"WAVES A: {inputs.waveA_path}")
            lib.write_log(f"WAVES B: {inputs.waveB_path}")
        lib.write_log(f"Project Prefix: {inputs.proj_prefix}")
        lib.write_log(f"OUTPUT directory: {inputs.out_shp_path}")
        lib.write_log(f"Use Uncertainty: {inputs.use_uncertainty}")
        lib.write_log(f"Write internal tables: {inputs.use_outcsv}")
        lib.write_log(f"Use Insurance: {inputs.use_insurance}")
        lib.write_log(f"Use Contents: {inputs.use_contents}")
        if inputs.use_contents:
            lib.write_log(f"Contents DDF lut: {inputs.cddf_lut_path}")
        lib.write_log(f"Building DDF lut: {inputs.bddf_lut_path}")

        lib.write_log(" ")
        lib.write_log("Validating Inputs...")
        in_req_shps = [inputs.bldg_path, inputs.swelA_path, inputs.swelB_path]
        in_opt_shps = [inputs.waveA_path, inputs.waveB_path]
        if inputs.use_waves:
            in_req_shps.extend(in_opt_shps)

        # validate inputs
        num_errs = 0

        # req shp files
        err_msgs = []
        crs_name_list = []
        for path in in_req_shps:
            if not os.path.exists(path):
                err_msgs.append(f"File {path} not found")
                continue
            if os.path.splitext(path)[1] != '.shp':
                err_msgs.append(f"File format of {path} is invalid. Must be an esri shapefile")
                continue

            # only read the file once it is known to exist and be a shapefile
            cur_crs = CRS.from_user_input(gpd.read_file(path).crs)
            axis_unit_list = [axis.unit_name for axis in cur_crs.axis_info]
            ft_unit_axis = [unit for unit in axis_unit_list if 'feet' in unit.lower() or 'foot' in unit.lower() or 'ft' in unit.lower()]

            if not cur_crs.is_projected:
                err_msgs.append(f'CRS of file {path} is not projected')
            elif len(ft_unit_axis) == 0:
                err_msgs.append(f'CRS of file {path} does not use US feet as linear unit')
            else:
                crs_name_list.append(cur_crs.to_epsg())

        if len(set(crs_name_list)) > 1 or len(set(crs_name_list)) == 0:
            err_msgs.append("Input shapefiles' CRS do not match or do not have EPSG codes")

        # req csv files
        if not os.path.exists(inputs.bddf_lut_path):
            err_msgs.append(f'File {inputs.bddf_lut_path} not found')
        elif os.path.splitext(inputs.bddf_lut_path)[1] != '.csv':
            err_msgs.append(f'File format of {inputs.bddf_lut_path} is invalid. Must be a csv')

        # project dir (req)
        if inputs.out_shp_path in ('', None) or not os.path.isdir(inputs.out_shp_path):
            err_msgs.append("Output directory not set, is null, or is not a valid directory")
        elif not os.path.exists(inputs.out_shp_path):
            err_msgs.append("Output directory not found")

        # project prefix (req)
        alphanum_proj_prefix = ''.join([char for char in inputs.proj_prefix if char.isalnum() or char=='_'])
        if inputs.proj_prefix in (None, ''):
            err_msgs.append("Project prefix not set, but is required")
        elif alphanum_proj_prefix != inputs.proj_prefix:
            lib.write_log("\tWARNING: Removing invalid characters from project prefix. Updated project prefix = '{0}'".format(alphanum_proj_prefix))
            inputs.proj_prefix = alphanum_proj_prefix

        # check optional inputs and print out warnings
        if inputs.storm_csv not in ('', None):
            if not os.path.exists(inputs.storm_csv):
                lib.write_log("\tWARNING: File {0} not found. Use Stormsuite set to False".format(inputs.storm_csv))
            elif os.path.splitext(inputs.storm_csv)[1] != '.csv':
                lib.write_log("\tWARNING: File format of {0} is invalid. Must be a csv. Use Stormsuite set to False".format(inputs.storm_csv))

        # print out error messages
        for msg in err_msgs:
            lib.write_log('\tERROR: {0}'.format(msg))
            num_errs += 1

        if num_errs > 0:
            lib.write_log("")
            lib.write_log("VALIDATION FAILED - See above error(s)")
            lib.haltscript()

        lib.write_log("")
        lib.write_log("Running Average Annualized Losses...")
        lib.write_log("END STEP 0.")


        ##############
        #   STEP 1 - Get building points
        #   Load shapefile, drop unrequired fields, rename required fields, validate data, export to shapefile
        #   RESULTS:
        #   BUILDING.SPDF = Spatial Points Data Frame _BUILDINGS.shp
        # Start timer for step 1
        step1_start = monotonic()
        lib.write_log(' ')
        lib.write_log(' BEGIN STEP 1. Building Data...')
        BUILDING_SPDF = lib.formatBuildings(inputs)
        lib.write_log('Building Sample:')
        lib.write_log(str(BUILDING_SPDF.head()))

        # write resulting shapefile
        out_shp_lay = fr'{inputs.proj_prefix}_BUILDINGS'
        out_shp_dsn = os.path.join(inputs.out_shp_path, fr'{out_shp_lay}.shp')
        lib.write_log(fr'Writing Buildings to: {out_shp_dsn}')
        BUILDING_SPDF.to_file(out_shp_dsn)

        lib.write_log('END STEP 1.')
        # Calculate seconds elapsed for step 1
        step1_elapsed = math.ceil(monotonic() - step1_start)
        lib.write_log(f'Step 1: {step1_elapsed} sec elapsed')
        ##############

        ##############
        #   STEP 2a - Get surge points
        #   load the surge points
        #       if uncertainty, also load SWEL_cl84, make PSURGERR
        #   RESULTS:
        #       PSURGE.SPDF
        #       PSURGE84.SPDF, optional
        #       PSURGERR.SPDF, optional

        step2a_start = monotonic()
        lib.write_log(' ')
        lib.write_log('BEGIN STEP 2a. Import Surge Data...')

        # create extensible swel attribute map
        lib.write_log('.creating SWEL attribute map.')
        #open the dbf
        temptab = gpd.read_file(inputs.swelA_path)
        #get all names that begin with 'e'
        tempcols = [col for col in temptab.columns if col.startswith('e')]
        #get the numeric portion of those columns
        rp_avail = [lib.removeNonNumeric(col) for col in tempcols]

        # add these to the attribute map
        surge_attr_in = ['SID'] + temptab.columns[temptab.columns.get_indexer(tempcols).tolist()].to_list()
        SWEL_attr_out = ['SID'] + [f's{x}' for x in rp_avail]
        surge_attr_desc = ['node ID'] + ['surge elevation' for _ in range(len(SWEL_attr_out)-1)]
        surge_attr_default = -99999 
        surge_attr_check = [0] + [1 for _ in range(len(SWEL_attr_out)-1)]
        surge_attr_type = ['numeric' for _ in range(len(SWEL_attr_out))]
        surge_attr_ddc = [0] + [1 for _ in range(len(SWEL_attr_out)-1)]
        SWEL_attr_map = pd.DataFrame({'IN':surge_attr_in,'OUT':SWEL_attr_out,'DESC':surge_attr_desc,'TYPE':surge_attr_type,'DEF':surge_attr_default,'CHECK':surge_attr_check,'DDC':surge_attr_ddc})

        # load SWEL BE
        lib.write_log('Surge A (Best Estimate)')
        PSURGE_SPDF = lib.formatSurge(inputs.swelA_path, SWEL_attr_map)
        PSURGE_DF = PSURGE_SPDF.drop(columns='geometry')
        lib.write_log('Sample Surge A:')
        lib.write_log(str(PSURGE_SPDF.head()))

        # if uncertainty, load swel cl84 and create surge SD
        if inputs.use_uncertainty:
            # load SWEL B
            lib.write_log('Surge B (84CL Estimate)')
            PSURGE84_SPDF = lib.formatSurge(inputs.swelB_path, SWEL_attr_map)
            PSURGE84_DF = PSURGE84_SPDF.drop(columns='geometry')
            lib.write_log('Sample Surge B:')
            lib.write_log(str(PSURGE84_SPDF.head()))

            # subtract A from B to get SD
            SWERR_attr_map = SWEL_attr_map.copy()
            sel = SWEL_attr_map.query("DESC == 'surge elevation'").index.to_list()
            SWERR_attr_map.loc[sel, 'OUT'] = SWERR_attr_map.loc[sel, 'OUT'].str.replace('s','sx', regex=False)
            SWERR_attr_map.loc[sel, 'DESC'] = 'surge error'

            PSURGERR_SPDF = PSURGE84_SPDF.copy()
            PSURGERR_DF = PSURGERR_SPDF.drop(columns='geometry')
            PSURGERR_DF.columns = SWERR_attr_map["OUT"].tolist()

            minus_tab = PSURGERR_DF.copy()
            tempcols = PSURGERR_DF.columns.get_indexer([col for col in PSURGERR_DF.columns.to_list() if col.startswith('sx')])

            # create a copy of PSURGE and PSURGE84 that converts -999 to NA for
            # calculating differences
            # Erase copies with the trash collection
            PSURGE_copy_DF = PSURGE_DF.copy()
            PSURGE_copy_DF.mask(PSURGE_copy_DF.lt(-999), np.nan, inplace=True)

            PSURGE84_copy_DF = PSURGE84_DF.copy()
            PSURGE84_copy_DF.mask(PSURGE84_copy_DF.lt(-999), np.nan, inplace=True)

            for i in tempcols:
                minus_tab.iloc[:,i] = PSURGE84_copy_DF.iloc[:,i] - PSURGE_copy_DF.iloc[:,i]

            minus_tab.mask(minus_tab.lt(0), 0, inplace=True)
            PSURGERR_DF = minus_tab.copy()

            fill_value = pd.to_numeric(SWERR_attr_map['DEF'].iloc[-1])
            PSURGERR_DF.mask(PSURGERR_DF.isna(), fill_value, inplace=True)
            lib.write_log('Sample Surge Error (B-A):')
            lib.write_log(str(PSURGERR_DF.head()))

        # Find the fully NULL nodes in PSURGE .
        # remove those features from each feature class
        goodrows = PSURGE_DF.iloc[:,SWEL_attr_map.shape[0]-1].ne(pd.to_numeric(SWEL_attr_map["DEF"].iat[-1])).to_list()

        PSURGE_DF = PSURGE_DF.iloc[goodrows,:]
        psurge_geom = PSURGE_SPDF['geometry'].iloc[goodrows]
        PSURGE84_DF = PSURGE84_DF.iloc[goodrows,:]
        psurge84_geom = PSURGE84_SPDF['geometry'].iloc[goodrows]
        PSURGERR_DF = PSURGERR_DF.iloc[goodrows,:]
        psurgerr_geom = PSURGERR_SPDF['geometry'].iloc[goodrows]

        PSURGE_SPDF = gpd.GeoDataFrame(data=PSURGE_DF, geometry=psurge_geom, crs=psurge_geom.crs)
        PSURGE84_SPDF = gpd.GeoDataFrame(data=PSURGE84_DF, geometry=psurge84_geom, crs=psurge84_geom.crs)
        PSURGERR_SPDF = gpd.GeoDataFrame(data=PSURGERR_DF, geometry=psurgerr_geom, crs=psurgerr_geom.crs)

        lib.write_log('END STEP 2a.')
        step2a_elapsed = math.ceil(monotonic() - step2a_start)
        lib.write_log(f'Step 2a: {step2a_elapsed} sec elapsed')

        ##############
        #   STEP 2b - Get waves
        #   load the wave points
        #       if uncertainty, load WAVE_cl84, make WAVE_SD
        #   RESULTS:
        #       PWAVE.SPDF, future optional
        #       PWAVE84.SPDF, optional
        #       PWAVERR.SPDF, optional
        step2b_start = monotonic()
        lib.write_log(' ')
        lib.write_log('BEGIN STEP 2b. Import Wave Data...')

        if inputs.use_waves:
            # Create extensible wave attribute map
            lib.write_log('.creating WAV attribute map.')
            WV_attr_map = SWEL_attr_map.copy()
            sel = WV_attr_map['CHECK'] == 1
            WV_attr_map.loc[sel, 'OUT'] = WV_attr_map.loc[sel, 'OUT'].str.replace('s','w', regex=False)
            WV_attr_map['DESC'] = WV_attr_map['DESC'].str.replace('surge','wave')
            WV_attr_map['DESC'] = WV_attr_map['DESC'].str.replace('elevation','height')
            WV_attr_map.loc[sel,'DEF'] = -99999

            # load Wave BE
            lib.write_log('Wave A (Best Estimate)')
            PWAVE_SPDF = lib.formatSurge(inputs.waveA_path, WV_attr_map)
            PWAVE_DF = PWAVE_SPDF.drop(columns='geometry')
            lib.write_log('Sample Wave A:')
            lib.write_log(str(PWAVE_SPDF.head()))

            # if uncertainty, load waves cl84 and create wave SD
            if inputs.use_uncertainty:
                # load Wave B
                lib.write_log('Wave B (84CL Estimate)')
                PWAVE84_SPDF = lib.formatSurge(inputs.waveB_path, WV_attr_map)
                PWAVE84_DF = PWAVE84_SPDF.drop(columns='geometry')
                lib.write_log('Sample Wave B:')
                lib.write_log(str(PWAVE84_SPDF.head()))

                # subtract A from B to get SD
                WVERR_attr_map = WV_attr_map.copy()
                sel = WV_attr_map['DESC'] == 'wave height'
                WVERR_attr_map.loc[sel,'OUT'] = WVERR_attr_map.loc[sel,'OUT'].str.replace('w','wx', regex=False)
                WVERR_attr_map.loc[sel,'DESC'] = 'wave error'

                PWAVERR_SPDF = PWAVE84_SPDF.copy()
                PWAVERR_DF = PWAVERR_SPDF.drop(columns='geometry')
                current_cols = PWAVERR_DF.columns.to_list()
                new_cols = WVERR_attr_map['OUT'].tolist()
                PWAVERR_DF.rename(columns=dict(zip(current_cols, new_cols)), inplace=True)

                minus_tab = PWAVERR_DF.copy()
                tempcols = PWAVERR_DF.columns.get_indexer([col for col in PWAVERR_DF.columns.to_list() if col.startswith("wx")])

                # create copies of PWAVE and PWAVE84 that convert sentinel no-data values
                # (anything below -999) to NA before calculating differences; the copies
                # are discarded by garbage collection afterward
                PWAVE_copy_DF = PWAVE_DF.copy()
                PWAVE_copy_DF.mask(PWAVE_copy_DF.lt(-999), np.nan, inplace=True)

                PWAVE84_copy_DF = PWAVE84_DF.copy()
                PWAVE84_copy_DF.mask(PWAVE84_copy_DF.lt(-999), np.nan, inplace=True)
                for i in tempcols:
                    minus_tab.iloc[:,i] = PWAVE84_copy_DF.iloc[:,i] - PWAVE_copy_DF.iloc[:,i]

                minus_tab.mask(minus_tab.lt(0), 0, inplace=True)
                PWAVERR_DF = minus_tab

                fill_value = pd.to_numeric(WVERR_attr_map['DEF'].iloc[-1])
                PWAVERR_DF.mask(PWAVERR_DF.isna(), fill_value, inplace=True)
                lib.write_log('Sample Wave Error (B-A):')
                lib.write_log(str(PWAVERR_DF.head()))

                # Find the fully NULL nodes in PWAVE and get the SIDs.
                # remove those features from each feature class
                goodrows = PWAVE_DF.iloc[:,WV_attr_map.shape[0]-1].ne(pd.to_numeric(WV_attr_map["DEF"].iat[-1])).to_list()

                PWAVE_DF = PWAVE_DF.iloc[goodrows,:]
                pwave_geom = PWAVE_SPDF.geometry.iloc[goodrows]
                PWAVE84_DF = PWAVE84_DF.iloc[goodrows,:]
                pwave84_geom = PWAVE84_SPDF.geometry.iloc[goodrows]
                PWAVERR_DF = PWAVERR_DF.iloc[goodrows,:]
                pwaverr_geom = PWAVERR_SPDF.geometry.iloc[goodrows]

                PWAVE_SPDF = gpd.GeoDataFrame(data=PWAVE_DF, geometry=pwave_geom, crs=pwave_geom.crs)
                PWAVE84_SPDF = gpd.GeoDataFrame(data=PWAVE84_DF, geometry=pwave84_geom, crs=pwave84_geom.crs)
                PWAVERR_SPDF = gpd.GeoDataFrame(data=PWAVERR_DF, geometry=pwaverr_geom, crs=pwaverr_geom.crs)

        # log step end even when wave import is skipped
        lib.write_log('END STEP 2b.')
        step2b_elapsed = math.ceil(monotonic() - step2b_start)
        lib.write_log(f'Step 2b: {step2b_elapsed} sec elapsed')

        ##############
        #   STEP 2c - Attach surge to buildings
        #   RESULTS:
        #       WSE.SPDF, _WSE.SHP
        step2c_start = monotonic()
        lib.write_log(' ')
        lib.write_log('BEGIN Step 2c. Attach Surge to Buildings')
        surge_attr_map = SWEL_attr_map.copy()
        out_tab = None
        # Get 3NN PSURGE
        lib.write_log('.run 3NN on surge nodes.')
        dfs = []
        for i in range(len(BUILDING_SPDF)):
            row = BUILDING_SPDF.iloc[i]
            # drop geometry column for attributes (mimics @data[this.row,])
            attr_row = row.drop(labels=[BUILDING_SPDF.geometry.name])
            # extract coordinates (mimics @coords[this.row,])
            coord_xy = (row.geometry.x, row.geometry.y)
            res = lib.attachWSELtoBUILDING3(attr_row, coord_xy, PSURGE_SPDF, surge_attr_map)
            dfs.append(res)
        out_tab = pd.concat(dfs, ignore_index=True)

        # format out.tab to remove factors and then make all columns numeric
        lib.write_log('.formatting table.')
        out_tab = out_tab.apply(lambda col: col.astype(str) if col.dtype.name == "category" else col)
        out_tab = out_tab.apply(pd.to_numeric, errors="raise")

        if inputs.use_uncertainty:
            lib.write_log('.get PSURGE ERR.')
            surge_attr_map = SWEL_attr_map.copy()
            out_tab2 = None
            dfs = []
            # Get 3NN PSURGERR
            for i in range(len(BUILDING_SPDF)):
                row = BUILDING_SPDF.iloc[i]
                # drop geometry column for attributes (mimics @data[this.row,])
                attr_row = row.drop(labels=[BUILDING_SPDF.geometry.name])
                # extract coordinates (mimics @coords[this.row,])
                coord_xy = (row.geometry.x, row.geometry.y)
                res = lib.attachWSELtoBUILDING3(attr_row, coord_xy, PSURGERR_SPDF, SWERR_attr_map)
                dfs.append(res)
            out_tab2 = pd.concat(dfs, ignore_index=True)

            # format out.tab2 to remove factors and then make all columns numeric
            lib.write_log('.formatting table.')
            out_tab2 = out_tab2.apply(lambda col: col.astype(str) if col.dtype.name == "category" else col)
            out_tab2 = out_tab2.apply(pd.to_numeric, errors="raise")

        else:
            out_tab2 = None

        # only join the error columns when uncertainty results exist
        if out_tab2 is not None:
            out_tab3 = out_tab2.loc[:,["BID"]+SWERR_attr_map.query("DESC=='surge error'")["OUT"].to_list()]
            out_tab1 = out_tab.merge(out_tab3, on="BID", how="left")
        else:
            out_tab1 = out_tab.copy()
        out_tab1 = out_tab1.sort_values(by="BID").reset_index(drop=True)

        # set building validity
        lib.write_log(".attributing building validity.")
        # find which buildings have good probability of being affected by max event
        tabWET = out_tab1.loc[:,["DEMFT", "s10000", "sx10000"]].apply(lambda row: 1-norm.cdf(row.iat[0], row.iat[1], row.iat[2]), axis=1, result_type="reduce")
        sel = tabWET.ge(0.05).to_list()
        out_tab1.loc[sel, 'VALID'] = 1
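
        # The test above evaluates, per building,
        #   P(wet at the 10,000-yr event) = 1 - NormalCDF(DEMFT; mean=s10000, sd=sx10000)
        # For example (illustrative numbers only): DEMFT = 10 ft, s10000 = 9 ft, and
        # sx10000 = 1 ft give 1 - norm.cdf(10, 9, 1) ~= 0.159 >= 0.05, so VALID = 1.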

        # invalidate buildings with missing ground elevations
        # (DEM sentinel values, i.e. DEMFT at or below -999)
        sel = out_tab1["DEMFT"].le(-999).tolist()
        out_tab1.loc[sel, 'VALID'] = 0

        # build output shapefile
        # no record filtering and no joining, so out points geometry = in points geometry
        WSE_SPDF = gpd.GeoDataFrame(data=out_tab1, geometry=BUILDING_SPDF['geometry'], crs=BUILDING_SPDF['geometry'].crs)

        # create a copy of WSE for writing output that converts NA to -99999 so ESRI SHP
        # doesn't auto-convert NA to 0; the copy is discarded by garbage collection
        # afterward
        WSE2_SPDF = WSE_SPDF.copy()
        WSE2_SPDF = WSE2_SPDF.fillna(pd.to_numeric(SWEL_attr_map["DEF"].iat[1]))

        # write output shapefile
        out_wse_path = os.path.join(inputs.out_shp_path, f'{inputs.proj_prefix}_WSE.shp')
        lib.write_log(f"Writing WSE SPDF to {out_wse_path}")
        WSE2_SPDF.to_file(out_wse_path)

        lib.write_log('END STEP 2c.')
        step2c_elapsed = math.ceil(monotonic() - step2c_start)
        lib.write_log(f'Step 2c: {step2c_elapsed} sec elapsed')

        ##############
        #   STEP 2d - Attach waves to buildings
        #   RESULTS:
        #       WAV.SPDF, _WAV.SHP
        step2d_start = monotonic()
        lib.write_log(' ')
        lib.write_log('BEGIN Step 2d. Attach Waves to Buildings')
        surge_attr_map = WV_attr_map.copy()
        out_tab = None
        # Get 3NN PWAVE
        lib.write_log('.run 3NN on wave nodes.')
        dfs = []
        for i in range(len(BUILDING_SPDF)):
            row = BUILDING_SPDF.iloc[i]
            # drop geometry column for attributes (mimics @data[this.row,])
            attr_row = row.drop(labels=[BUILDING_SPDF.geometry.name])
            # extract coordinates (mimics @coords[this.row,])
            coord_xy = (row.geometry.x, row.geometry.y)
            res = lib.attachWSELtoBUILDING3(attr_row, coord_xy, PWAVE_SPDF, surge_attr_map)
            dfs.append(res)
        out_tab = pd.concat(dfs, ignore_index=True)

        # format out.tab to remove factors and then make all columns numeric
        lib.write_log('.formatting table.')
        out_tab = out_tab.apply(lambda col: col.astype(str) if col.dtype.name == "category" else col)
        out_tab = out_tab.apply(pd.to_numeric, errors="raise")

        if inputs.use_uncertainty:
            lib.write_log('.get PWAVE ERR.')
            surge_attr_map = WVERR_attr_map.copy()
            out_tab2 = None
            dfs = []
            for i in range(len(BUILDING_SPDF)):
                row = BUILDING_SPDF.iloc[i]
                # drop geometry column for attributes (mimics @data[this.row,])
                attr_row = row.drop(labels=[BUILDING_SPDF.geometry.name])
                # extract coordinates (mimics @coords[this.row,])
                coord_xy = (row.geometry.x, row.geometry.y)
                res = lib.attachWSELtoBUILDING3(attr_row, coord_xy, PWAVERR_SPDF, surge_attr_map)
                dfs.append(res)
            out_tab2 = pd.concat(dfs, ignore_index=True)

            # format out.tab2 to remove factors and then make all columns numeric
            lib.write_log('.formatting table.')
            out_tab2 = out_tab2.apply(lambda col: col.astype(str) if col.dtype.name == "category" else col)
            out_tab2 = out_tab2.apply(pd.to_numeric, errors="raise")

        else:
            out_tab2 = None

        # only join the error columns when uncertainty results exist
        if out_tab2 is not None:
            out_tab3 = out_tab2.loc[:,["BID"]+WVERR_attr_map.query("DESC=='wave error'")["OUT"].to_list()]
            out_tab1 = out_tab.merge(out_tab3, on="BID", how="left")
        else:
            out_tab1 = out_tab.copy()
        out_tab1 = out_tab1.sort_values(by="BID").reset_index(drop=True)

        WV_SPDF = gpd.GeoDataFrame(data=out_tab1, geometry=BUILDING_SPDF['geometry'], crs=BUILDING_SPDF['geometry'].crs)

        # write output shapefile
        out_wav_path = os.path.join(inputs.out_shp_path, f'{inputs.proj_prefix}_WAV.shp')
        lib.write_log(f"Writing WAV SPDF to {out_wav_path}")
        WV_SPDF.to_file(out_wav_path)

        lib.write_log('END STEP 2d.')
        step2d_elapsed = math.ceil(monotonic() - step2d_start)
        lib.write_log(f'Step 2d: {step2d_elapsed} sec elapsed')


        ##############
        #  	STEP 3a
        #		Prep data, reduce buildings, assign DDFs
        # 	RESULTS:
        #		FULL.SPDF = wse + wav
        #		VAL.SPDF = reduced dataset
        #		PREP.SPDF, _PREP.SHP
        step3a_start = monotonic()
        lib.write_log(" ")
        lib.write_log("BEGIN STEP 3. Data Prep.")

        # BUILD the WSE Attribute map for tracking field names
        lib.write_log(".creating WSE attribute map.")
        wse_attr_out = inputs.bldg_attr_map.query("DESC == 'new building id'")["OUT"].to_list()+inputs.bldg_attr_map.query("DESC == 'ground elevation'")["OUT"].to_list()+["VALID","spt1","spt2","spt3"]+SWEL_attr_map.query("DDC == 1")["OUT"].to_list()+SWERR_attr_map.query("DDC == 1")["OUT"].to_list()+WV_attr_map.query("DDC == 1")["OUT"].to_list()+WVERR_attr_map.query("DDC == 1")["OUT"].to_list()
        wse_attr_desc = ["new building id","ground elevation","valid for analysis","NNsurgeID","NNsurgeID","NNsurgeID"]+["surge elevation"]*SWEL_attr_map.query("DDC == 1").shape[0]+["surge error"]*SWERR_attr_map.query("DDC == 1").shape[0]+["wave height"]*WV_attr_map.query("DDC == 1").shape[0]+["wave error"]*WVERR_attr_map.query("DDC == 1").shape[0]
        wse_attr_prep = [1,0,0,0,0,0]+[1]*SWEL_attr_map.query("DESC == 'surge elevation' and DDC==1").shape[0]+[1]*SWERR_attr_map.query("DESC == 'surge error' and DDC==1").shape[0]+[1]*WV_attr_map.query("DESC == 'wave height' and DDC==1").shape[0]+[1]*WVERR_attr_map.query("DESC == 'wave error' and DDC==1").shape[0]
        # JOIN flags are identical to the PREP flags
        wse_attr_join = wse_attr_prep.copy()
        wse_attr_map = pd.DataFrame({"OUT":wse_attr_out, "DESC":wse_attr_desc, "PREP":wse_attr_prep, "JOIN":wse_attr_join})
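
        # The resulting map pairs each output field with its role; schematically
        # (illustrative rows only):
        #        OUT                DESC  PREP  JOIN
        #        BID     new building id     1     1
        #      DEMFT    ground elevation     0     0
        #      VALID  valid for analysis     0     0
        # the s*/sx*/w*/wx* event columns with DDC == 1 follow, all with PREP = JOIN = 1.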

        # append wave data to swel data	
        lib.write_log(".appending WAV to SWEL.")
        WSE_tab = WSE_SPDF.drop('geometry', axis=1)
        WAV_tab = WV_SPDF.drop('geometry', axis=1)

        full_tab = WSE_tab.join(WAV_tab.loc[:,["BID"]+WV_attr_map.query("DESC=='wave height'")["OUT"].to_list()+WVERR_attr_map.query("DESC=='wave error'")["OUT"].to_list()].set_index("BID"), on="BID", how='left')
        full_tab.sort_values(by="BID", inplace=True)
        shp_geom = BUILDING_SPDF.geometry
        shp_proj = BUILDING_SPDF.crs
        FULL_SPDF = gpd.GeoDataFrame(data=full_tab, geometry=shp_geom, crs=shp_proj)

        lib.write_log(".filtering Buildings based on validity.")
        # select previously identified valid points
        sel = FULL_SPDF["VALID"].eq(1).to_list()
        shp_geom = FULL_SPDF.iloc[sel, FULL_SPDF.columns.get_loc('geometry')]
        shp_proj = FULL_SPDF.crs
        VAL_SPDF = gpd.GeoDataFrame(data=full_tab.iloc[sel,:], geometry=shp_geom, crs=shp_proj)
        lib.write_log(f"FULL BLDG: {FULL_SPDF.shape[0]}")
        lib.write_log(f"Reduced BLDG: {VAL_SPDF.shape[0]}")

        lib.write_log(".grabbing required building attributes.")
        # because VAL.SPDF is filtered, need to similarly filter the building file
        # get the IDs of the valid buildings
        sel = BUILDING_SPDF["BID"].isin(VAL_SPDF['BID'].to_list()).to_list()
        # grab the building attribute table and keep only the records corresponding to valid buildings
        BUILDING_tab = BUILDING_SPDF.drop(columns='geometry')
        bldgred_tab = BUILDING_tab.iloc[sel, BUILDING_SPDF.columns.get_indexer(inputs.bldg_attr_map.query("ANLYS==1")["OUT"].to_list())].copy()
        # add sort field because bldg.tab and bldg.coords are 1:1 and .tab might get jumbled in future merges or joins
        bldgred_tab["sort"] = list(range(bldgred_tab.shape[0]))

        # DDFS
        # Assign DDFs to buildings
        lib.write_log(".assigning coastal DDFs.")
        # Select Four TASK4 DDFs, 1 for freshwater intrusion, 1 each for low-wave, med-wave, and
        # high-wave conditions
        DDF_tab = lib.assign_TASK4_DDFs(inputs, bldgred_tab)
        bldgred_tab["DDF1"] = DDF_tab["DDF1"]
        bldgred_tab["DDF2"] = DDF_tab["DDF2"]
        bldgred_tab["DDF3"] = DDF_tab["DDF3"]
        bldgred_tab["DDF4"] = DDF_tab["DDF4"]

        # placeholder for future Erosion DDFs
        bldgred_tab["DDFE"] = 0

        lib.write_log(".creating PREP attribute map.")
        # this attribute map holds the field names and descriptions for the PREP
        # attribute table written in this section.
        prep_attr_out = inputs.bldg_attr_map.query("ANLYS==1")["OUT"].to_list()+["DDF1","DDF2","DDF3","DDF4","DDFE"]+SWEL_attr_map.query("DDC == 1")["OUT"].to_list()+SWERR_attr_map.query("DDC == 1")["OUT"].to_list()+WV_attr_map.query("DDC == 1")["OUT"].to_list()+WVERR_attr_map.query("DDC == 1")["OUT"].to_list()
        prep_attr_desc = inputs.bldg_attr_map.query("ANLYS==1")["DESC"].to_list()+["DDF ID"]*4+["DDF Erosion"]+["surge elevation"]*SWEL_attr_map.query("DDC == 1").shape[0]+["surge error"]*SWERR_attr_map.query("DDC == 1").shape[0]+["wave height"]*WV_attr_map.query("DDC == 1").shape[0]+["wave error"]*WVERR_attr_map.query("DDC == 1").shape[0]
        prep_attr_map = pd.DataFrame({"OUT":prep_attr_out, "DESC":prep_attr_desc})

        # merge wsels to bldg attributes and format
        VAL_tab = VAL_SPDF.drop('geometry', axis=1)
        prep_tab = bldgred_tab.join(VAL_tab.loc[:,wse_attr_map.query("PREP==1")["OUT"].to_list()].set_index("BID"), on="BID", how='left')

        # sort to ensure proper order with VAL.SPDF@coords, then remove any extra field
        prep_tab.sort_values("sort",inplace=True)
        prep_tab.reset_index(inplace=True, drop=True)
        prep_tab.drop(columns=["sort"], inplace=True)

        # make sure all values in DDFE are 0
        # (added because some values in DDFE kept being incorrectly set to null)
        prep_tab["DDFE"] = 0

        # build output SPDF
        lib.write_log("Sample PREP data:")
        lib.write_log(str(prep_tab.head()))

        shp_geom = VAL_SPDF['geometry'].reset_index(drop=True)
        shp_proj = VAL_SPDF.crs
        PREP_SPDF = gpd.GeoDataFrame(data=prep_tab, geometry=shp_geom, crs=shp_proj)

        # write output shapefile
        # create a copy of PREP for writing that converts NA to the default sentinel so
        # ESRI SHP doesn't auto-convert NA to 0; the copy is discarded by garbage
        # collection afterward
        prep2_tab = prep_tab.copy()
        prep2_tab.mask(prep2_tab.isna(), pd.to_numeric(SWEL_attr_map.iat[1,SWEL_attr_map.columns.get_loc("DEF")]), inplace=True)
        PREP2_SPDF = gpd.GeoDataFrame(data=prep2_tab, geometry=PREP_SPDF.geometry, crs=PREP_SPDF.crs)
        out_shp_lay = inputs.proj_prefix + "_PREP"
        out_shp_dsn = os.path.join(inputs.out_shp_path, out_shp_lay+".shp")
        lib.write_log(f".writing Prep data to {out_shp_dsn}")
        PREP2_SPDF.to_file(out_shp_dsn)

        lib.write_log("END STEP 3.")
        step3_elapsed = math.ceil(monotonic()-step3a_start)
        lib.write_log(f"STEP 3: {step3_elapsed} sec elapsed")

        ##############
        #  	STEP 4a
        #	Run Monte Carlo simulations
        # 	RESULTS:
        #		pvals, pvals.csv
        ##############
        step4a_start = monotonic()
        lib.write_log(" ")
        lib.write_log("BEGIN STEP 4a. Create/Import Event Probabilities.")

        # load or create probabilities
        if inputs.use_stormsuite:
            lib.write_log("Loading pre-defined storm suite...")
            pvals = pd.read_csv(inputs.storm_csv, float_precision="round_trip")
            inputs.mc_n = pvals.shape[0]
            lib.write_log("Storm suite hard bounds: not defined")
        else:
            lib.write_log("Creating random storm suite.")
            # draw N random probabilities within the configured bounds
            pvals = np.random.uniform(inputs.nbounds[0], inputs.nbounds[1], inputs.mc_n)
            pvals = pd.DataFrame(data=pvals, columns=['x'])
            lib.write_log(f"Storm suite hard bounds: {inputs.nbounds[0]}, {inputs.nbounds[1]}")

        lib.write_log(f"Storm suite soft bounds: {pvals.iloc[:,0].min()}, {pvals.iloc[:,0].max()}")
        lib.write_log(".writing PVALS csv.")
        pvals.to_csv(os.path.join(inputs.out_shp_path,"pvals.csv"), index=False)

        lib.write_log("END STEP 4a.")
        step4a_elapsed = math.ceil(monotonic()-step4a_start)
        lib.write_log(f"Step 4a: {step4a_elapsed} sec elapsed")

        ##############
        #   STEP 4b
        #   run losses in parallel
        #   RESULTS:
        #       RESULTS.SPDF
        ##############
        step4b_start = monotonic()
        lib.write_log(" ")
        lib.write_log("BEGIN STEP 4b. Determining Losses.")

        # if writing output as CSV, create the /TAB folder if it doesn't already exist
        if inputs.use_outcsv:
            lib.write_log(".looking for CSV folder.")
            out_csvtable_path = os.path.join(inputs.out_shp_path, "TAB")
            if not os.path.exists(out_csvtable_path):
                lib.write_log(".creating CSV folder.")
                os.mkdir(out_csvtable_path)

        # grab all building IDs for analysis
        allbldgs = PREP_SPDF["BID"]
        building_summary_parts = []
        prep_tab = PREP_SPDF.drop(columns='geometry')

        lib.write_log(f"Calculating damage, losses, and annualized loss for {allbldgs.shape[0]} buildings...")
        # collect per-building tables in a list and concatenate once; growing a
        # DataFrame inside the loop is quadratic in the number of buildings
        for this_building in allbldgs.to_list():
            temp_tab = lib.runMC_AALU_x4(prep_tab, pvals, int(this_building), inputs, prep_attr_map)
            building_summary_parts.append(temp_tab)
        building_summary = pd.concat(building_summary_parts, ignore_index=True)
        building_summary.loc[:,"A"] = 1

        lossTables_elapsed = math.ceil(monotonic()-step4b_start)
        lib.write_log(f"loss tables: {lossTables_elapsed} sec elapsed")

        lib.write_log(".formatting AAL results.")
        # create a base to which to add run results AAL results
        base_tab = pd.DataFrame({"BID":BUILDING_SPDF["BID"], "ANLYS":list(range(1,BUILDING_SPDF["BID"].shape[0]+1))})

        # join
        # field names hard-coded in runMC_AALU_x
        join_tab = base_tab.join(building_summary.set_index("BID"), on="BID", how="left").sort_values("ANLYS")
        join_tab.loc[:,"ANLYS"] = 0
        join_tab.iloc[join_tab["A"].eq(1).to_list(), join_tab.columns.get_loc("ANLYS")] = 1
        join_tab.drop(columns="A", inplace=True)

        # replace NAs with 0 in the hard-coded result fields
        for fld in ["BAAL", "BAALmin", "BAALmax", "FLAG_DF16"]:
            join_tab[fld] = join_tab[fld].fillna(0)

        # add run results to results table
        building_tab = BUILDING_SPDF.drop(columns='geometry')
        results_tab = building_tab.join(join_tab.set_index("BID"), on="BID", how="left").sort_values("BID")

        # build output SPDF for heatmap
        shp_geom = BUILDING_SPDF.geometry
        shp_proj = BUILDING_SPDF.crs
        RESULTS_SPDF = gpd.GeoDataFrame(data=results_tab, geometry=shp_geom, crs=shp_proj)

        # write output shapefile
        out_shp_lay = inputs.proj_prefix + "_RESULTS"
        out_shp_dsn = os.path.join(inputs.out_shp_path, out_shp_lay+".shp")
        RESULTS_SPDF.to_file(out_shp_dsn)

        # print results
        lib.finalReportAAL2(results_tab, prep_attr_map)
        lib.write_log("END STEP 4b.")
        step4b_elapsed = math.ceil(monotonic()-step4b_start)
        lib.write_log(f"Step 4b: {step4b_elapsed} sec elapsed")



        ##############
        #  	STEP 5
        #		create heatmap from RESULTS.SPDF.  Uses the best estimate Building AAL only.
        #		grid value units = $ per acre
        # 	RESULTS:
        # 		kde.grid, _.tif
        ##############
        step5_start = monotonic()
        lib.write_log(" ")
        lib.write_log("STEP 5. Creating heatmap...")

        if inputs.use_heatmap:
            shp_geom = RESULTS_SPDF.geometry
            shp_proj = RESULTS_SPDF.geometry.crs

            # conversion for sq ft to acres
            sqft2acres = 43560

            lib.write_log(".calculate new grid dimensions.")
            # create a blank raster
            # get extents of points data
            coords_tab = shp_geom.get_coordinates(ignore_index=True)
            coords_mat = coords_tab.to_numpy()
            minx = coords_tab['x'].min()
            maxx = coords_tab['x'].max()
            miny = coords_tab['y'].min()
            maxy = coords_tab['y'].max()
            # determine columns and rows
            spanx = maxx - minx
            spany = maxy - miny
            num_cols = math.ceil(spanx/inputs.hm_resolution)+1
            num_rows = math.ceil(spany/inputs.hm_resolution)+1
            # determine starting X/Y
            gridx = minx - (((num_cols * inputs.hm_resolution) - spanx) / 2)
            gridy = miny + (((num_rows * inputs.hm_resolution) + spany) / 2)
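
            # Worked example of the centering math (made-up numbers): with minx = 0,
            # maxx = 1040, and hm_resolution = 500, spanx = 1040 and
            # num_cols = ceil(1040/500) + 1 = 4, so gridx = 0 - (4*500 - 1040)/2 = -480:
            # the 2000-ft-wide grid overhangs the point extent by 480 ft on each side.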

            # create starting grid with value 0
            lib.write_log(".create new empty grid.")
            x = np.linspace(start=gridx, stop=(gridx+(num_cols*inputs.hm_resolution)), num=num_cols)
            y = np.linspace(start=gridy, stop=(gridy+(num_rows*inputs.hm_resolution)), num=num_rows)
            X,Y = np.meshgrid(x,y)

            aal_tab = RESULTS_SPDF.drop(columns='geometry')
            aal_field = 'BAAL'
            # create an empty results matrix to store kde calculations
            kde_mat = np.zeros(shape=X.shape).astype("float32")
            out_hm_path = os.path.join(inputs.out_shp_path, f"{inputs.proj_prefix}_{inputs.hm_name}.tif")
            hm_transform = rasterio.transform.Affine.translation(gridx, gridy) * rasterio.transform.Affine.scale(inputs.hm_resolution, -inputs.hm_resolution)
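
            # The affine transform maps pixel (col, row) to map coordinates:
            # translation(gridx, gridy) anchors the raster's upper-left corner, and
            # scale(hm_resolution, -hm_resolution) steps columns eastward and rows
            # southward, the usual north-up GeoTIFF orientation.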

            with rasterio.open(
                out_hm_path,
                'w',
                driver='GTiff',
                height=kde_mat.shape[0],
                width=kde_mat.shape[1],
                count=1,
                dtype=kde_mat.dtype,
                crs=shp_proj,
                transform=hm_transform
            ) as hm:
                hm.write(kde_mat, 1)
                for i in range(kde_mat.shape[0]):
                    for j in range(kde_mat.shape[1]):
                        # get xcoord and ycoord of cell
                        cell_centroid = np.array(hm.xy(i,j)).reshape((1,2))

                        #calc NN
                        bpt_dist = scipy.spatial.distance.cdist(coords_mat, cell_centroid, metric='euclidean')

                        # filter for points that are inside the search radius
                        bpt_df = pd.DataFrame({"BID":aal_tab["BID"],
                                                "AAL":aal_tab[aal_field], 
                                                "Dist":bpt_dist.flatten().tolist()})
                        bpt_sel = bpt_df.query(f"Dist <= {inputs.hm_bandwidth}").copy()

                        # calculate density of filtered points and convert to acres;
                        # if the filter is empty, the cell keeps its initial 0
                        if bpt_sel.shape[0] > 0:
                            kde_mat[i,j] = lib.calcKernelDensity(bpt_sel, inputs.hm_bandwidth) * sqft2acres

                lib.write_log(".writing heatmap.")
                # transfer results matrix to the geo-raster
                hm.write(kde_mat, 1)
        else:
            lib.write_log("Heatmap skipped. Not requested by user.")

        lib.write_log("END STEP 5.")
        step5_elapsed = math.ceil(monotonic()-step5_start)
        lib.write_log(f"Step 5: {step5_elapsed} sec elapsed")

        #################
        # end parallel processing
        lib.write_log(" ")
        lib.write_log("AAL computations complete.")
        fullAnalysis_elapsed = math.ceil(monotonic()-fullAnalysis_start)
        lib.write_log(f"Full Analysis: {fullAnalysis_elapsed} sec elapsed")
        lib.write_log("**********")
        logging.shutdown()